text stringlengths 1.23k 293k | tokens float64 290 66.5k | created stringdate 1-01-01 00:00:00 2024-12-01 00:00:00 | fields listlengths 1 6 |
|---|---|---|---|
Sparse Bayesian time-varying covariance estimation in many dimensions
We address the curse of dimensionality in dynamic covariance estimation by modeling the underlying co-volatility dynamics of a time series vector through latent time-varying stochastic factors. The use of a global-local shrinkage prior for the elements of the factor loadings matrix pulls loadings on superfluous factors towards zero. To demonstrate the merits of the proposed framework, the model is applied to simulated data as well as to daily log-returns of 300 S&P 500 members. Our approach yields precise correlation estimates, strong implied minimum variance portfolio performance and superior forecasting accuracy in terms of log predictive scores when compared to typical benchmarks.
Introduction
The joint analysis of hundreds or even thousands of time series exhibiting a potentially timevarying variance-covariance structure has been on numerous research agendas for well over a decade. In the present paper we aim to strike the indispensable balance between the necessary flexibility and parameter parsimony by using a factor stochastic volatility (SV) model in combination with a global-local shrinkage prior. Our contribution is threefold. First, the proposed * Department of Finance, Accounting and Statistics, WU Vienna University of Economics and Business, Welthandelsplatz 1/D4/4, 1020 Vienna, Austria, +43 1 31336-5593<EMAIL_ADDRESS>approach offers a hybrid cure to the curse of dimensionality by combining parsimony (through imposing a factor structure) with sparsity (through employing computationally efficient absolutely continuous shrinkage priors on the factor loadings). Second, the efficient construction of posterior simulators allows for conducting Bayesian inference and prediction in very high dimensions via carefully crafted Markov chain Monte Carlo (MCMC) methods made available to end-users through the R (R Core Team 2017) package factorstochvol (Kastner 2017).
Third, we show that the proposed method is capable of accurately predicting covariance and precision matrices which we asses via statistical and economic forecast evaluation in several simulation studies and an extensive real-world example.
Concerning factor SV modeling, early key references include Harvey et al. (1994), Pitt and Shephard (1999), and Aguilar and West (2000) which were later picked up and extended by e.g. Philipov and Glickman (2006), Chib et al. (2006), Han (2006), Lopes and Carvalho (2007), Nakajima and West (2013), Zhou et al. (2014), and Ishihara and Omori (2017). While reducing the dimensionality of the problem at hand, models with many factors are still rather rich in parameters. Thus, we further shrink unimportant elements of the factor loadings matrix to zero in an automatic way within a Bayesian framework. This approach is inspired by highdimensional regression problems where the number of parameters frequently exceeds the size of the data. In particular, we adopt the approach brought forward by Caron and Doucet (2008) and Griffin and Brown (2010) who suggest to use a special continuous prior structure -the Normal-Gamma prior -on the regression parameters (in our case the factor loadings matrix).
This shrinkage prior is a generalization of the Bayesian Lasso (Park and Casella 2008) and has recently received attention in the econometrics literature (Bitto and Frühwirth-Schnatter 2016;. Another major issue for such high-dimensional problems is the computational burden that goes along with statistical inference, in particular when joint modeling is attempted instead of multi-step approaches or rolling-window-like estimates. Suggested solutions include Engle and Kelly (2012) who propose an estimator assuming that pairwise correlations are equal at every point in time, Pakel et al. (2014) who consider composite likelihood estimation, Gruber and West (2016) who use a decoupling-recoupling strategy to parallelize estimation (executed on graphical processors), Lopes et al. (2016) who treat the Cholesky-decomposed covariance matrix within the framework of Bayesian time-varying parameter models, and Oh and Patton (2017) who choose a copula-based approach to link separately estimated univariate models. We propose to use a Gibbs-type sampler which allows to jointly take into account both parameter as well as sampling uncertainty in a finite-sample setup through fully Bayesian inference, thereby enabling inherent uncertainty quantification. Additionally, this approach allows for fully probabilistic in-and out-of-sample density predictions.
For related work on sparse Bayesian prior distributions in high dimensions, see e.g. Kaufmann and Schumacher (2013) who use a point mass prior specification for factor loadings in dynamic factor models or Ahelegbey et al. (2016) who use a graphical representation of vector autoregressive models to select sparse graphs. From a mathematical point of view, Pati et al. (2014) investigate posterior contraction rates for a related class of continuous shrinkage priors for static factor models and show excellent performance in terms of posterior rates of convergence with respect to the minimax rate. All of these works, however, assume homoskedasticity and are thus potentially misspecified when applied to financial or economic data. For related methods that take into account heteroskedasticity, see e.g. Nakajima and West (2013) and Nakajima and West (2017) who employ a latent thresholding process to enforce time-varying sparsity. Moreover, Zhao et al. (2016) approach this issue via dependence networks, Loddo et al. (2011) use stochastic search for model selection, and Basturk et al. (2016) use time-varying combinations of dynamic models and equity momentum strategies. These methods are typically very flexible in terms of the dynamics they can capture but are applied to moderate dimensional data only.
We illustrate the merits of our approach through extensive simulation studies and an indepth financial application using 300 S&P 500 members. In simulations, we find considerable evidence that the Normal-Gamma shrinkage prior leads to substantially sparser factor loadings matrices which in turn translate into more precise correlation estimates when compared to the usual Gaussian prior on the loadings. 1 In the real-world application, we evaluate our model against a wide range of alternative specifications via log predictive scores and minimum variance portfolio returns. Factor SV models with sufficiently many factors turn out to imply extremely competitive portfolios in relation to well-established methods which typically have been specifically tailored for such applications. Concerning density forecasts, we find that our approach outperforms all included competitors by a large margin.
The remainder of this paper is structured as follows. In Section 2, the factor SV model is specified and the choice of prior distributions is discussed. Section 3 treats statistical inference via MCMC methods and sheds light on computational aspects concerning out-of-sample density predictions for this model class. Extensive simulation studies are presented in Section 4, where the effect of the Normal-Gamma prior on correlation estimates is investigated in detail. In Section 5, the model is applied to 300 S&P 500 members. Section 6 wraps up and points out possible directions for future research.
Factor SV Model
To reduce dimensionality, factor SV models utilize a decomposition of the m × m covariance matrix Σ t with m(m + 1)/2 free elements into a factor loadings matrix Λ of size m × r, an r-dimensional diagonal matrix V t and an m-dimensional diagonal matrix U t in the following fashion: This reduces the number of free elements to mr + m + r. Because r is typically chosen to be much smaller than m, this specification constrains the parameter space substantially, thereby inducing parameter parsimony. For the paper at hand, Λ is considered to be time invariant whereas the elements of both V t and U t are allowed to evolve over time through parametric stochastic volatility models, i.e. U t = diag(exp(h 1t ), . . . , exp(h mt )) and h m+j,t ∼ N φ m+j h m+j,t−1 , σ 2 m+j , j = 1, . . . , r.
More specifically, U t describes the idiosyncratic (series-specific) variances while V t contains the variances of underlying orthogonal factors f t ∼ N r (0, V t ) that govern the contemporaneous dependence. The autoregressive process in (3) is assumed to have mean zero to identify the unconditional scaling of the factors.
This setup is commonly written in the following hierarchical form (e.g. Chib et al. 2006): where the distributions are assumed to be conditionally independent for all points in time. To make further exposition clearer, let y = (y 1 · · · y T ) denote the m×T matrix of all observations, is referred to as the vector of parameters where µ i is the level, φ i the persistence, and σ 2 i the innovation variance of h i . To denote specific rows and columns of matrices, we use the "dot" notation, i.e. X i· refers to the ith row and X ·j to the jth column of X. The proportions of variances explained through the common factors for each component series, C it = 1 − U ii,t /Σ ii,t for i = 1, . . . , m, are referred to as the communalities. Here, U ii,t and Σ ii,t denote the ith diagonal element of U t and Σ t , respectively. As by construction 0 ≤ U ii,t ≤ Σ ii,t , the communality for each component series and for all points in time lies between zero and one. The joint (overall) communality C t = m −1 m i=1 C it is simply defined as the arithmetic mean over all series.
Three comments are in order. First, the variance-covariance decomposition in (1) can be rewritten as Σ t = Λ t Λ t + U t with Λ t := ΛV 1/2 t . An essential assumption within the factor framework is that both V t as well as U t are diagonal matrices. This implies that the factor loadings Λ t are dynamic but can only vary column-wise over time. Consequently, the time-variability of Σ t 's off-diagonal elements are cross-sectionally restricted while its diagonal elements are allowed to move independently across series. Hence, the "strength" of a factor, i.e. its cross-sectional explanatory power, varies jointly for all series loading on it. Consequently, it is likely that more factors are needed to properly explain the co-volatility dynamics of a multivariate time series than in models which allow for completely unrestricted timevarying factor loadings (Lopes and Carvalho 2007), correlated factors (V t not diagonal, see Zhou et al. 2014), or approximate factor models (U t not diagonal, see Bai and Ng 2002). Our specification, however, is less prone to overfitting and has the significant advantage of vastly simplified computations.
Second, identifying loadings for latent factor models is a long-standing issue that goes back to at least Anderson and Rubin (1956) who discuss identification of factor loadings. Even though this problem is alleviated somewhat when factors are allowed to exhibit conditional heteroskedasticity (Sentana and Fiorentini 2001;Rigobon 2003), most authors have chosen an upper triangular constraint of the loadings matrix with unit diagonal elements, thereby introducing dependence on the ordering of the data (see Frühwirth-Schnatter and Lopes 2017).
However, when estimation of the actual factor loadings is not the primary concern (but rather a means to estimate and predict the covariance structure), this issue is less striking because a unique identification of the loadings matrix is not necessary. 2 This allows leaving the factor loadings matrix completely unrestricted, thus rendering the method invariant with respect to the ordering of the series.
Third, note that even though the joint distribution of the data is conditionally Gaussian, its stationary distribution has thicker tails. Nevertheless, generalizations of the univariate SV model to cater for even more leptokurtic distributions (e.g. Liesenfeld and Jung 2000) or asymmetry (e.g. Yu 2005) can straightforwardly be incorporated in the current framework.
All of these extensions, however, tend to increase both sampling inefficiency as well as running time considerably and could thus preclude inference in very high dimensions.
Prior Distributions
The usual prior for each (unrestricted) element of the factor loadings matrix is a zero-mean Gaussian distribution, i.e. Λ ij ∼ N 0, τ 2 ij independently for each i and j, where τ 2 ij ≡ τ 2 is a constant specified a priori (e.g. Pitt and Shephard 1999;Aguilar and West 2000;Chib et al. 2006;Ishihara and Omori 2017;. To achieve more shrinkage, we model this variance hierarchically by placing a hyperprior on τ 2 ij . This approach is related to Bhattacharya and Dunson (2011) and Pati et al. (2014) who investigate a similar class of priors for homoskedastic factor models. More specifically, let Intuitively, each prior variance τ 2 ij provides element-wise shrinkage governed independently for each row by λ 2 i . Integrating out τ 2 ij yields a density for Λ ij |λ 2 i of the form p(Λ ij |λ 2 where K is the modified Bessel function of the second kind.
This implies that the conditional variance of Λ ij |λ 2 i is 2/λ 2 i and the excess kurtosis of Λ ij is 3/a i . The hyperparameters a i , c i , and d i are fixed a priori, whereas a i in particular plays a crucial role for the amount of shrinkage this prior implies. Choosing a i small enforces strong 2 The conditional covariance matrix Σ t = ΛV t Λ + U t involves a rotation-invariant transformation of Λ.
shrinkage towards zero, while choosing a i large imposes little shrinkage. For more elaborate discussions on Bayesian shrinkage in general and the effect of a i specifically, see Griffin and Brown (2010) and Polson and Scott (2011). Note that the Bayesian Lasso prior (Park and Casella 2008) arises as a special case when a i = 1.
One can see prior (4) as row-wise shrinkage with element-wise adaption in the sense that all variances in row i can be thought of as "random effects" from the same underlying distribution.
In other words, each series has high and a priori independent mass not to load on any factors and thus can be thought of as series-specific shrinkage. For further aspects on introducing hierarchical prior structure via the Normal-Gamma distribution, see Griffin and Brown (2017) and . Analogously, it turns out to be fruitful to also consider column-wise shrinkage with element-wise adaption, i.e.
This means that each factor has high and a priori independent mass not to be loaded on by any series and thus can be thought of as factor-specific shrinkage.
Statistical Inference
There are a number of methods to estimate factor SV models such as quasi-maximum likelihood (e.g. Harvey et al. 1994), simulated maximum likelihood (e.g. Liesenfeld and Richard 2006;Jungbacker and Koopman 2006), and Bayesian MCMC simulation (e.g. Pitt and Shephard 1999;Aguilar and West 2000;Chib et al. 2006;Han 2006). For high dimensional problems of this kind, Bayesian MCMC estimation proves to be a very efficient estimation method because it allows simulating from the high dimensional joint posterior by drawing from lower dimensional conditional posteriors.
MCMC Estimation
One substantial advantage of MCMC methods over other ways of learning about the posterior distribution is that it constitutes a modular approach due to the conditional nature of the sampling steps. Consequently, conditionally on the matrix of variances τ = (τ ij ) 1≤i≤m; 1≤j≤r , we can adapt the sampling steps of . For obtaining draws for τ , we follow Griffin and Brown (2010). The MCMC sampling steps for the factor SV model are: 1. For factors and idiosyncratic variances, obtain m conditionally independent draws of the idiosyncratic log-volatilities from h i |y i· , Λ i· , f , µ i , φ i , σ i and their parameters from Similarly, perform r updates for the factor logvolatilities from h m+j |f m+j,· , φ m+j , σ m+j and their parameters from φ m+j , σ m+j |f m+j,· , h m+j for j = 1, . . . , r. This amounts to m + r univariate SV updates. 3 2a. Row-wise shrinkage only: For i = 1, . . . , m, sample from wherer = min(i, r) if the loadings matrix is restricted to have zeros above the diagonal andr = r in the case of an unrestricted loadings matrix. For i = 1, . . . , m and j = 1, . . . ,r, draw from τ 2 2b. Column-wise shrinkage only: For j = 1, . . . , r, sample from wherej = j if the loadings matrix is restricted to have zeros above the diagonal and j = 1 otherwise. For j = 1, . . . , r and i =j, There is a vast body of literature on efficiently sampling univariate SV models. For the paper at hand, we use R package stochvol (Kastner 2016). 4 The Generalized Inverse Gaussian distribution GIG(m, k, l) has a density proportional to x m−1 exp − 1 2 (kx + l/x) . To draw from this distribution, we use the algorithm described in Hörmann and Leydold (2013) which is implemented in the R package GIGrvg (Leydold and Hörmann 2017). r = 0 r = 1 r = 5 r = 10 r = 20 r = 50 plain R, m the ith normalized observation vector and is the T ×r design matrix. This constitutes a standard Bayesian regression update.
3*. When inference on the factor loadings matrix is sought, optionally redraw Λ using deep interweaving ) to speed up mixing. This step is of less importance if one is interested in the (predictive) covariance matrix only.
Draw the factors from
Hereby,ỹ t = (y 1t e −h 1t /2 , . . . , y mt e −hmt/2 ) denotes the normalized observation vector at time t and is the m × r design matrix. This constitutes a standard Bayesian regression update.
The above sampling steps are implemented in an efficient way within the R package factorstochvol (Kastner 2017). Table 1 displays the empirical run time in milliseconds per MCMC iteration. Note that using more efficient linear algebra routines such as Intel MKL leads to substantial speed gains only for models with many factors. To a certain extent, computation can further be sped up by computing the individual steps of the posterior sampler in parallel.
In practice, however, doing so is only useful in shared memory environments (e.g. through multithreading/multiprocessing) as the increased communication overhead in distributed memory environments easily outweighs the speed gains.
Prediction
Given draws of the joint posterior distribution of parameters and latent variables, it is in principle straightforward to predict future covariances and consequently also future observations.
This gives rise to the predictive density (Geweke and Amisano 2010), defined as As with most quantities of interest in Bayesian analysis, computing the predictive density can be challenging because it constitutes an extremely high-dimensional integral which cannot be solved analytically. However, it may be approximated at a given "future" point y f through Monte Carlo integration, where κ (k) [1:t] denotes the kth draw from the posterior distribution up to time t. If (6) is evaluated at y f = y o t+1 , it is commonly referred to as the (one-step-ahead) predictive likelihood at time t + 1, denoted PL t+1 . Also, draws from (5) can straightforwardly be obtained by generating values y (k) t+1 from the distribution given through the (in our case multivariate Gaussian) density For the model at hand, two ways of evaluating the predictive likelihood particularly stand out.
First, one could average over k = 1, . . . , K densities of To obtain PL t+1 , we suggest to average over k = 1, . . . , K densities of [1:t] . This form of the predictive likelihood is obtained by analytically performing integration in (5) with respect to f t+1, [1:t] . Consequently, it is numerically more stable, irrespectively of the number of factors r. However, it requires a full m-variate Gaussian density evaluation for each k and is thus computationally much more expensive. To a certain extent, the computational burden can be mitigated by using the Woodbury matrix identity, . This substantially speeds up the repetitive evaluation of the multivariate Gaussian distribution if r m.
We apply these results for comparing competing models A and B between time points t 1 and t 2 and consider cumulative log predictive Bayes factors defined through log BF t 1 , where PL t (A) and PL t (B) denote the predictive likelihood of model A and B at time t, respectively. When the cumulative log predictive Bayes factor is greater than 0 at a given point in time, there is evidence in favor of model A, and vice versa.
Thereby, data up to time t 1 is regarded as prior information and out-of-sample evaluation starts at time t 1 + 1.
Simulation Studies
The aim of this section is to apply the model to a simulated data set in order to illustrate the shrinkage properties of the Normal-Gamma prior for the factor loadings matrix elements. For this purpose, we first illustrate several scenarios on a single ten dimensional data set. Second, we investigate the performance of our model in a full Monte Carlo simulation based on 100 simulated data sets. Third, and finally, we investigate to what extend these results carry over to higher dimensions.
In what follows, we compare five specific prior settings. Setting 1 refers to the usual standard Gaussian prior with variance τ 2 ij ≡ τ 2 = 1 and constitutes the benchmark. Setting 2 is the row-wise Bayesian Lasso where a i = 1 for all i. Setting 3 is the column-wise Bayesian Lasso where a j = 1 for all j. Setting 4 is the Normal-Gamma prior with row-wise shrinkage where Turning towards the zero loadings, the strongest shrinkage is introduced by both variants of the Normal-Gamma prior, followed by the different variants of the Bayesian Lasso and the standard Gaussian prior. This is particularly striking for the loadings on the superfluous third factor. The difference between row-and column-wise shrinkage for the Lasso variants can most clearly be seen in row 9 and column 3, respectively. The row-wise Lasso captures the "zero-row" 9 better, while the column-wise Lasso captures the "zero-column" 3 better.
Because of the increased element-wise shrinkage of the Normal-Gamma prior, the difference between the row-wise and the column-wise variant are minimal.
In the context of covariance modeling, however, factor loadings can be viewed upon as a mere means to parsimony, not the actual quantity of interest. Thus, Figure 2 Table 2: Estimated log Bayes factors at t = 1500 against the 2-factor model using a standard Gaussian prior, where data up to t = 1000 is treated as the training sample. Lines 1 to 5 correspond to cumulative 1-day-ahead Bayes factors, lines 6 to 10 correspond to 10-days-ahead predictive Bayes factors.
intervals are tighter.
To conclude, we briefly examine predictive performance by investigating cumulative log predictive Bayes factors. Thereby, the first 1000 points in time are treated as prior information, then 1-day-and 10-days-ahead predictive likelihoods are recursively evaluated until t = 1500.
Medium Dimensional Monte Carlo Study
For a more comprehensive understanding of the shrinkage effect, the above study is repeated for 100 different data sets where all latent variables are generated randomly for each realization. In Table 3, the medians of the respective relative RMSEs (root mean squared errors, averaged over time) between the true and the estimated pairwise correlations are depicted.
The part above the diagonal represents the relative performance of the row-wise Lasso prior (setting 2) with respect to the baseline prior (setting 1), the part below the diagonal represents the relative performance of the row-wise Normal-Gamma prior (setting 4) with respect Table 3: Relative RMSEs of pairwise correlations. Above the diagonal: Row-wise Lasso (a i = 1) vs. benchmark standard Gaussian prior with geometric means (first row). Below the diagonal: Row-wise Normal-Gamma (a i = 0.1) vs. row-wise Lasso prior (a i = 1) with geometric means (last row). Numbers greater than one mean that the former prior performs better than the latter. Hyperhyperparameters are set to c i = d i = 0.001. All values reported are medians of 100 repetitions.
to the row-wise Lasso prior (setting 2). Clearly, gains are highest for series 9 which is by construction completely uncorrelated to the other series. Additionally, geometric averages of these performance indicators are displayed in the first row (setting 2 vs. baseline) and in the last row (setting 4 vs. baseline). They can be seen as the average relative performance of one specific series' correlation estimates with all other series.
To illustrate the fact that extreme choices of c i and d i are crucial for the shrinkage effect of the Bayesian Lasso, Table 4 displays relative RMSEs for moderate hyperparameter choices c i = d i = 1. Note that the performance of the Bayesian Lasso deteriorates substantially while performance of the Normal-Gamma prior is relatively robust with regard to these choices. This indicates that the shrinkage effect of the Bayesian Lasso is strongly dependent on the particular choice of these hyperparameters (governing row-wise shrinkage) while the Normal-Gamma can adapt better through increased element-wise shrinkage. Table 5 which lists RMSEs and MAEs for all prior settings, averaged over the non-trivial correlation matrix entries as well as time. Note again that results under the Lasso prior are sensitive to the particular choices of the global shrinkage hyperparameters as well as the choice of row-or column-wise shrinkage, which is hardly the case for the Norma-Gamma prior. Interestingly, the performance gains achieved through shrinkage prior usage are higher when absolute errors are considered. This is coherent with the extremely high kurtosis of Normal-Gamma-type 1 2 3 4 5 6 7 8 9 10 Average 1 1.00 1.00 1.01 1.01 1.01 1.01 1.01 1.00 1.02 1.00 1 1.00 1.00 1.01 1.00 1.00 1.01 1.00 1.01 1.00 2 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.01 1.00 3 1.00 1.01 1.01 1.00 1.00 1.02 1.00 1.03 1.00 4 3.06 1.00 2.99 1.02 1.02 1.00 1.00 1.01 1.02 5 1.00 1.01 1.00 3.02 1.00 1.02 1.00 1.03 1.00 6 1.00 1.01 1.00 2.93 1.00 1.02 1.00 1.04 1.00 7 3.25 1.00 2.98 1.00 2.90 2.72 1.00 1.02 1.02 8 1.00 1.00 1.00 1.01 1.01 1.00 1.01 1.02 1.00 9 3.51 3.07 3.28 3.19 3.40 3.37 3.28 3.03 1.02 10 1.00 1.01 1.00 2.92 1.00 1.00 2.98 1.01 3.32 Average 2 1.49 1.14 1.43 2.11 1.43 1.42 2.07 1.14 3.37 1.44 Table 4: Relative RMSEs of pairwise correlations. Above the diagonal: Row-wise Lasso (a i = 1) vs. benchmark standard Gaussian prior with geometric means (first row). Below the diagonal: Row-wise Normal-Gamma (a i = 0.1) vs. row-wise Lasso prior (a i = 1) with geometric means (last row). Numbers greater than one mean that the former prior performs better than the latter. Hyperhyperparameters are set to c i = d i = 1. All values reported are medians of 100 repetitions.
An overall comparison of the errors under different priors is provided in
priors which, while placing most mass around zero, allow for large values.
High Dimensional Monte Carlo Study
The findings are similar if dimensionality is increased; in analogy to above, we report overall RMSEs and MAEs for 495 000 pairwise correlations, resulting from m = 100 component series at T = 1000 points in time. The factor loadings for the r = 10 factors are again randomly sampled with 43.8% of the loadings being equal to zero, resulting in about 2.6% of the pairwise correlations being zero. Using this setting, 100 data sets are generated; for each of these, a separate (overfitting) factor SV model using r = 11 factors without any prior restrictions on the factor loadings matrix is fit. The error measures are computed and aggregated. Table 6 reports the medians thereof. In this setting, the shrinkage priors outperform the standard Gaussian prior by a relatively large margin; the effect of the specific choice of the global shrinkage hyperparameters is less pronounced.
Application to S&P 500 Data
In this section we apply the SV factor model to stock prices listed in the Standard & Poor's 500 index. We only consider firms which have been continuously included in the index from The presentation consists of two parts. First, we exemplify inference using a multivariate stochastic volatility model and discuss the outcome. Second, we perform out-of-sample predictive evaluation and compare different models. To facilitate interpretation of the results discussed in this section, we consider the GICS 5 classification into 10 sectors listed in Table 7. We run our sampler employing the Normal-Gamma prior with row-wise shrinkage for 110 000 draws and discard the first 10 000 draws as burn-in. 6 Of the remaining 100 000 draws every 10th draw is kept, resulting in 10 000 draws used for posterior inference. Hyperparameters are set as follows: a i ≡ a = 0.1, c i ≡ c = 1, d i ≡ d = 1, b µ = 0, B µ = 100, a 0 = 20, b 0 = 1.5, B σ = 1, B m+j = 1 for j = 1, . . . , r. To prevent factor switching, we set all elements above the diagonal to zero. The leading series are chosen manually after a preliminary unidentified run such that series with high loadings on that particular factor (but low loadings on the other factors) become leaders. Note that this intervention (which introduces an order dependency) is only necessary for interpreting the factor loadings matrix but not for covariance estimation or prediction. Concerning MCMC convergence, we observe excellent mixing for both the covariance as well as the correlation matrix draws. To exemplify, trace plots of the first 1000 draws after burn-in and thinning for posterior draws of the log determinant distribution of the covariance and correlation matrices at t = T are displayed in Figure 3. The corresponding factor log variances are displayed in Figure 6. Apart from featuring similar low-to medium frequency properties, each process exhibits specific characteristics. First, notice the sharp increase of volatility in early 2010 which is mainly visible for the "overall" factor 1. The second factor (Utilities) displays a pre-crisis volatility peak during early 2008.
The third factor, driven by Energy and Materials, shows relatively smooth volatility behavior while the fourth factor, governed by the Financial, exhibits a comparably "nervous" volatility evolution. Higher correlation can be spotted throughout, both within sectors but also between sectors.
There are only few companies that show little and virtually no companies that show no correlation with others. Another two years later, we again see a different overall picture.
Lower correlations throughout become apparent with moderate correlations remaining within the sectors Energy, Utilities, and in particular Financials. The matrix has been rearranged to reflect the different GICS sectors in alphabetical order, cf. Table 7.
Predictive Likelihoods for Model Selection
Even for univariate volatility models evaluating in-or out-of-sample fit is not straightforward because the quantity of interest (the conditional standard deviation) is not directly observable.
While in lower dimensions this issue can be circumvented to a certain extent using intraday data and computing realized measures of volatility, the difficulty becomes more striking when the dimension increases. Thus, we focus on iteratively predicting the observation density out-of-sample which is then evaluated at the actually observed values. Because this approach involves re-estimating the model for each point in time, it is computationally costly but can be parallelized in a trivial fashion on multi-core computers.
For the S&P 500 data set, we begin by using the first 3000 data points (until 5/2/2006) to estimate the one-day-ahead predictive likelihood for day 3001 as well as the ten-day-ahead predictive likelihood for day 3010. In a separate estimation procedure, the first 1001 data points (until 5/3/2006) are used to estimate the one-day-ahead predictive likelihood for day 3002 and the corresponding ten-day-ahead predictive likelihood for day 3011, etc. This procedure is repeated for 1990 days until the end of the sample is reached.
We Accumulated log predictive likelihoods for the entire period are displayed in Figure 8. Gains in predictive power are substantial up to around 8 factors with little difference for the two priors.
After this point, the benefit of adding even more factors turns out to be less pronounced. On the contrary, the effect of the priors becomes more pronounced. Again, while differences in scores tend to be muted for models with fewer factors, the benefit of shrinkage grows when r gets larger. Estimated marginal log predictive likelihoods for industry "overall" (n = 300, method = "new") q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q 1−day−ahead, normal prior 1−day−ahead, NG prior 10−days−ahead, normal prior 10−days−ahead, NG prior Figure 8: Accumulated 1-day-and 10-days-ahead log predictive likelihoods for models with 0, 1, . . . , 20 factors. Data until t = 3000 is treated as training data.
While joint models with r > 0 outperform the marginal (no-factor) model for all points in time, days of particular turbulence particularly stand out. To illustrate this, we display average and top three log predictive gains over the no-factor model in worldwide. It appears that joint modeling of stock prices is particularly important on days of extreme events when conditional correlations are often higher.
Using Observable Instead of Latent Factors
An alternative to estimating latent factors from data is to use observed factors instead (cf. Wang et al. 2011 of the excess return on the market, a size factor SMB, and a book-to-market factor HML (Fama and French 1993); the momentum factor MOM (Carhart 1997) captures the empirically observed tendency for falling asset prices to fall further, and rising prices to keep rising. For estimation of this model we proceed exactly as before, except that we omit the last step of our posterior sampler and keep f fixed at the observed values.
Without presenting qualitative results in detail due to space constraints, we note that both the loadings on and the volatilities of the market excess returns show a remarkably close resemblance to those corresponding to the first latent factor displayed in Figures 5 and 6. To a certain extent (although much less pronounced), this is also true for SMB and the second latent factor as well as HML and the fourth latent factor. However, most of the loadings on MOM are shrunk towards zero and there is no recognizable similarity to the remaining latent factor from the original model. Log predictive scores for this model are very close to those for the SV model with two latent factors. In what follows, we term this approach FF+MOM.
Comparison to Other Models
We now turn to investigating the statistical performance of the factor SV model via out-ofsample predictive measures as well as its suitability for optimal asset allocation. The competi- where ι denotes an m-variate vector of ones. Using these weights, we compute the corresponding realized portfolio returns r t+1 for t = 3000, 3001, . . . , 3999, effectively covering an evaluation period from 5/3/2006 to 3/1/2010. In the first three columns of Table 9, we report annualized empirical standard deviations, annualized average excess returns over those obtained from the equal weight portfolio, and the quotient of these two measures, the Sharpe ratio (Sharpe 1966).
Considering the portfolio standard deviation presented in the first column of Table 9, it turns out that the Ledoit-Wolf shrinkage estimator implies an annualized standard deviation of about 12.5 which is only matched by factor SV models with many factors. Lower-dimensional factor SV models, including FF+MOM, as well as simple MAs and highly persistent EWMAs do not perform quite as well but are typically well below 20. Less persistent EWMAs, the no-factor SV model and the naïve equal weight portfolio exhibit standard deviations higher than 20. Considering average returns, FF+MOM and factor SV models with around 10 to 20 factors tend to do well for the given time span. In the third column, we list Sharpe ratios, where factor SV models with 10 to 20 factors show superior performance, in particular when the Normal-Gamma prior is employed. Note however that column two and three have to be interpreted with some care, as average asset and portfolio returns generally have a high standard error.
Second, we use what we coin pseudo log predictive scores (PLPSs), i.e. Gaussian approximations to the actual log predictive scores. This simplification is necessary because most of the above-mentioned methods only deliver point estimates of the forecast covariance matrix and it is not clear how to properly account for estimation uncertainty. Moreover, the PLPS is simpler to evaluate as there is no need to numerically solve a high-dimensional integral. Consequently, it is frequently used instead of the actual LPS in high dimensions while still allowing for evaluation of the covariance accuracy (Adolfson et al. 2007;Carriero et al. 2016;Huber 2016). More specifically, we use data up to time t to determine a point estimateΣ t+1 for Σ t+1 and compute the logarithm of the multivariate Gaussian density N m 0,Σ t+1 evaluated at the actually observed value y o t+1 to obtain the one-day-ahead PLPS for time t + 1.
In terms of average PLPSs (the last column in Table 9), factor SV models clearly outperform all other models under consideration. In particular, even if r is chosen as small as r = 4 to match the number of factors, the model with latent factors outperforms FF+MOM. Using latent factors is generally preferable; note however that the 4-factor FF+MOM does better than the single-and no-factor SV models. Generally speaking, many factors appear to be needed for accurately representing the underlying data structure, irrespectively of the prior choice. Considering the computational simplicity of the Ledoit-Wolf estimator, its prediction accuracy is quite remarkable. It clearly outperforms the no-factor SV model which, in turn, beats simple MAs and EWMAs.
Conclusion and Outlook
The aim of this paper was to present an efficient and parsimonious method of estimating highdimensional time-varying covariance matrices through factor stochastic volatility models. We did so by proposing an efficient Bayesian MCMC algorithm that incorporates parsimony by modeling the covariance structure through common latent factors which themselves follow univariate SV processes. Moreover, we added additional sparsity by utilizing a hierarchical shrinkage prior, the Normal-Gamma prior, on the factor loadings. We showed the effectiveness of our approach through simulation studies and illustrated the effect of different shrinkage specifications. We applied the algorithm to a high-dimensional data set consisting of stock returns of 300 S&P 500 members and conducted an out-of-sample predictive study to compare different prior settings and investigate the choice of the number of factors. Moreover, we discussed the out-of-sample performance of a minimum variance portfolio constructed from the model-implied weights and related it to a number of competitors often used in practice.
Because the algorithm scales linearly in both the series length T as well as the number of component series m, applying it to even higher dimensions is straightforward. We have experimented with simulated data in thousands of dimensions for thousands of points in time and successfully recaptured the time-varying covariance matrix.
Further research could be directed towards incorporating prior knowledge into building the hierarchical structure of the Normal-Gamma prior, e.g. by choosing the global shrinkage parameters according to industry sectors. Alternatively, Villani et al. (2009) propose a mixture of experts model to cater for smoothly changing regression densities. It might be fruitful to adopt this idea in the context of covariance matrix estimation by including either observed (Fama-French) or latent factors as predictors and allowing for other mixture types than the ones discussed there. While not being the focus of this work, it is easy to extend the proposed method by exploiting the modular nature of Markov chain Monte Carlo methods. In particular, it is straightforward to combine it with mean models such as (sparse) vector autoregressions (e.g., Bańbura et al. 2010;Kastner and Huber 2017), dynamic regressions (e.g., Korobilis 2013), or time-varying parameter models (e.g., Koop and Korobilis 2013;). | 9,262.4 | 2016-08-30T00:00:00.000 | [
"Computer Science"
] |
Room-temperature sub-100 nm Néel-type skyrmions in non-stoichiometric van der Waals ferromagnet Fe3-xGaTe2 with ultrafast laser writability
Realizing room-temperature magnetic skyrmions in two-dimensional van der Waals ferromagnets offers unparalleled prospects for future spintronic applications. However, due to the intrinsic spin fluctuations that suppress atomic long-range magnetic order and the inherent inversion crystal symmetry that excludes the presence of the Dzyaloshinskii-Moriya interaction, achieving room-temperature skyrmions in 2D magnets remains a formidable challenge. In this study, we target room-temperature 2D magnet Fe3GaTe2 and unveil that the introduction of iron-deficient into this compound enables spatial inversion symmetry breaking, thus inducing a significant Dzyaloshinskii-Moriya interaction that brings about room-temperature Néel-type skyrmions with unprecedentedly small size. To further enhance the practical applications of this finding, we employ a homemade in-situ optical Lorentz transmission electron microscopy to demonstrate ultrafast writing of skyrmions in Fe3-xGaTe2 using a single femtosecond laser pulse. Our results manifest the Fe3-xGaTe2 as a promising building block for realizing skyrmion-based magneto-optical functionalities.
This paper is well written and presented a new phase, Fe-deficiency Fe3-xGaTe2, observing Néel-type skyrmions.I recommend for publication with minor revision.
-In terms of Fe composition, EDS is never a good quantitative way to determine the composition.The accuracy will be improved if the composition of perfect Fe3GaTe2 crystal flake is used as reference to compare with the Fe-deficient flake using the same experimental conditions.
-The space group changed to non-inversion symmetry based on the XRD and HAADF-STEM image.It will be more convincing if additional data using CBED to confirm the space group.
-The Fe columns in Fig. 1f are not really clear.The atomic columns do not look clean.A better clear HAADF-STEM image will be better that shows better Fe Columns.Would the author claim that the Fe deficiency occurs only at those FeII positions?-Can the author control the Fe content?
Reviewer #2 (Remarks to the Author): Reviewer comments on "Room-temperature sub-100 nm Néel-type skyrmions in nonstoichiometric van der Waals ferromagnet Fe3-xGaTe2 with ultrafast laser writability" In this work, the authors demonstrate the presence of Néel skyrmions in FGT due to a DMI tied to broken inversion symmetry from Fe vacancies.Additionally, the authors demonstrate the ability to drive the phase change optically, by locally heating the sample in-situ with a 520nm pulsed laser during LTEM measurement.
While the material parameters are marginally higher than previously reported in 2D vdW skyrmion systems, it seems to me that the novelty of this work comes from the mechanism/crystallography, which should be better described.The physics of the material is the new discovery here, as the material properties are almost the same as existing systems.I think this work needs to be more motivated by, and discuss, the physics of the system, rather than just experimental observations.In, for example, (Fe0.5Co0.5)3GeTe2: • The presence of Neel skyrmions and THE of approximately the same value has previously been reported (Zhang et al.Room-temperature skyrmion lattice in a layered magnet (Fe0.5Co0.5)5GeTe2.Sci. Adv.8,eabm7103(2022).) • The ordering of the skyrmion lattice has previously been reported (Meisenheimer et al.Ordering of room-temperature magnetic skyrmions in a polar van der Waals magnet.Nat Commun 14, 3744 (2023).) • A similar phase diagram and thickness dependence has been reported (Zhang et al.Room-temperature skyrmion lattice in a layered magnet (Fe0.5Co0.5)5GeTe2.Sci. Adv.8,eabm7103(2022).) • Approximately the same Curie temperature and similar mechanism (ordering of empty Fe sites) has been reported (Zhang et al.A room temperature polar magnetic metal.Phys.Rev. Materials 6, 044403 (2022).)And while the in-situ measurement is new, its value would come from actual dynamical measurements-by doing just quasistatic quenching, it doesn't seem like there is functionally any difference between just T,B cycling without the optical pump (Meisenheimer et al.Ordering of room-temperature magnetic skyrmions in a polar van der Waals magnet.Nat Commun 14, 3744 (2023), Zhang et al.Room-temperature skyrmion lattice in a layered magnet (Fe0.5Co0.5)5GeTe2.Sci. Adv.8,eabm7103(2022)., Zhang et al.Room-temperature skyrmion lattice in a layered magnet (Fe0.5Co0.5)5GeTe2.Sci. Adv.8,eabm7103(2022).)Especially so because you motivate the experiment from the perspective of "ultrafast writing of skyrmions" (L 104, L 313).
More specifically, there needs to be more discussion on the mechanism of the DMI.A global parameter implies that the empty Fe sites are ordering?DFT is used to simulate the value of DMI, but is the relaxed structure comparable?Does this value match with what is measured (does it give skyrmions/domains of similar size)?What does the anisotropy of the parent compound look like?
Having an in-plane P, to my knowledge, separates this from the existing work, but the parameters are ultimately largely the same?How does the directionality of B change the shape of the skyrmions?It seems like the interaction with D should be unique.
Why is there such a large variance in the sizes of the skyrmions?this is also different to what is generally reported.In fact, it almost looks bimodal in many images.
There needs to be more interpretation of the results and tying back to a structure-property relation for me to be comfortable recommending this paper.
Smaller notes: You mix cubic and hexagonal coordinates-since the system is hexagonal, you should make this consistent (e.g.[0001] instead of [001], L 138) You need to soften some statements in the introduction-e.g.rapid thermal annealing is not going to "revolutionize skyrmion logic" (L103), these processes have been around for a while and, additionally, are not particularly chip compatible.
Reviewer #3 (Remarks to the Author):
General Comment: The authors investigate a non-stoichiometric room temperature magnet Fe2.86GaTe2 crystal where the Fe vacancies induce the formation of DMI by spatial inversion symmetry breaking.Such an in-plane isotropic DMI brings about RT Néel-type skyrmions, and the size of the skyrmions can be regulated by the sample thickness and the external magnetic field.The dynamic writing process of RT skyrmions in Fe2.86GaTe2 flakes enhances the potential application of spintronic devices.The paper is timely and of interest.
Referee A's Comment 1:
In terms of Fe composition, EDS is never a good quantitative way to determine the composition.The accuracy will be improved if the composition of perfect Fe3GaTe2 crystal flake is used as reference to compare with the Fe-deficient flake using the same experimental conditions.
Author's reply: We highly appreciate the referee's suggestions to enhance the accuracy of composition determination.In order to obtain fully stoichiometric Fe3GaTe2, we have systematically grown a series of Fe3-xGaTe2 single crystals by varying the Fe content in the raw material composition, utilizing a Te-flux method.Subsequently, comprehensive energy-dispersive X-ray spectroscopy (EDS) mapping was conducted on the cleaved surfaces of these crystals to determine their chemical composition.To ensure the reliability of the EDS results, mapping was carried out at four distinct areas for each sample.Utilizing the Ga ratio as the normalization factor and ignoring the variation of Te content, their corresponding chemical composition were established.Table R1 provides a comprehensive overview of the raw material composition and the final crystal composition.It is clearly observed that when the raw Fe ratios fall below 0.8, the Fe3-xGaTe2 phase cannot be formed.Instead, a mixture of phases, including GaTe and Ga2Te3, is produced.When the raw Fe ratios are equal to or greater than 0.9, Fe3-xGaTe2 single crystals are crystallized, with the Fe content in these single crystals increasing proportionally with the raw Fe ratios.However, an increase in the raw Fe ratios to 1.3 results in the formation of FeTe phases.As surmised in Table R1, we find that the chemical formulas for the crystals with the minimum and maximum Fe content correspond to Fe2.84±0.05GaTe2and Fe2.96±0.02GaTe2,respectively.This result indicates that Fe vacancies always exist in the single crystals synthesized using a Te-flux method.
The formula of the later one, i.e.Fe2.96±0.02GaTe2, is quite close to that of the perfect stoichiometric Fe3GaTe2 crystal, and can be regarded as the reference to improve the accuracy suggested by the referee.In the previous version of our manuscript, we reported the observation of Néel-type skyrmions in the Fe3-xGaTe2 single crystals synthesized with a raw Fe ratio of 0.9.To highlight the existence of Fe vacancies, the chemical formula was denoted as Fe2.86GaTe2, which corresponds to the minimum Fe content determined by the EDS mapping.In the revised manuscript, to enhance the accuracy of the chemical formula, an error bar has been added by summarizing the EDX results obtained at different areas, and the chemical formula is denoted as Fe2.84±0.05GaTe2(see Page 3, Lines 118-121 in the main text).Simultaneously, in accordance with the referee's suggestion, the EDX results of the crystals with other Fe content and associated analysis are also presented in Supplementary Note 1 and Table S1 for reference.
Table R1.Summary of the raw material composition and the final product for the growth of Fe3-xGaTe2 samples using the self-flux method.Author's reply: We agree with the referee that Convergent Beam Electron Diffraction (CBED) is an efficient approach to characterize the non-inversion symmetry of the crystal structure.Following the referee's suggestion, we have performed CBED measurements with the electron beam injected along the [112 ̅ 0] zone axis, which is the only zone axis that allows for directly observing significant displacement of Feii columns.However, the obtained diffraction disks are seriously overlapped (as shown in Fig. R1a), despite various optimizations, including tuning the camera length, electron beam size, and beam-convergence angle α.Therefore, instead of utilizing CBED, we have performed selected area electron diffraction (SAED) measurements along both the [112 ̅ 0] and [101 ̅ 0] zone axes to confirm the non-inversion structural symmetry, as shown in Figs.R2a and R2b.It is clearly observed that the SAED patterns exhibit a series of (000l) diffraction patterns, such as (0005 ̅ ) , (0001 ̅ ) , ( 0003) and (0007) .
However, the simulations demonstrate that the odd l (l = 2n + 1) values of (000) diffractions are allowed for the non-centrosymmetric space group P3m1, but forbidden in a centrosymmetric space group P63/mmc.To further confirm the differences, we simulated SAED patterns based on the XRD refined non-centrosymmetric structure of Fe3-xGaTe2 (space group P3m1, with Feii deviation), which align exceptionally well with the experimental results (Fig. R2c and R2d).Instead, a prefect centrosymmetric structure of Fe3GaTe2 (space group P63/mmc, without Feii deviation), lacks (000l) and (hh2h ̅̅̅ l) diffraction patterns for the odd values of l (Fig. R2c and R2d).R4a . For a quantitative determination of the deviation of the Feii atoms, we focused on the region marked by the blue rectangles (left panel of Fig. R4a) comprising Te-Feii-Te atoms.We then vertically integrated the corresponding imaging intensity line profile (Fig. R4b).By referencing the center of the two Te atoms, the deviation of the Feii atom towards the c direction was determined to be 0.20 Å. Utilizing the same procedure, we surveyed an area of 2 × 17 unit cells, yielding an average Feii deviation of 0.16 0.06 Å.
Additionally, we observed that the image intensity of Fei-a above Feii is weaker than that of Fei-b below Feii, as evident in the imaging intensity line profile of Fei-a-Fei-b atoms in Fig. R4c.Since imaging intensity is generally proportional to the number of projected atoms [PNAS 107.26 (2010): 11682-11685], the contrast difference between Fei-a and Fei-b indicates asymmetric site occupations, suggesting a small quantity of Fe vacancies in the Fei-a site.In the previous version of our manuscript, we emphasized the Fe deficiency at Feii sites with an occupancy ratio of 0.8467.Upon re-evaluating the results of the refined single-crystal XRD, we discovered that the lower Fei-b sites are nearly fully occupied, while the higher Fei-a sites have an occupancy ratio of 0.9688.
Thus, the Fe deficiency occurs not only at the Feii positions, but also at some Fei-a positions.
In the revised manuscript, we have added the associated discussions on Feii deviations, Feii vacancies and the asymmetric Fei vacancies (refer to Page 7, Lines 187-198 and Page 6, Lines 154-159 in the main text) into the main text and Supplementary Figs. S4 and S5, as also shown below.
"However, the HAADF image along the [112 ̅ 0] zone axis (Fig. 1f) reveals that Feii atoms deviate clearly from the center position of the Te slices along the c-axis, which is also supported by the annular bright-field (ABF-STEM) image in Fig. S4.By referencing the center of the two Te atoms in a magnified ABF-STEM image (Fig. S5), an averaged Feii deviation is calculated as 0.16 ± 0.06 Å over an area of 2 × 17 unit cells (see Supplementary Note 2)." "In comparison to the stoichiometric Fe3GaTe2 with a centrosymmetric structure, the presence of Fe deficiency in Fe2.84±0.05GaTe2should exert a pivotal influence on the Feii deviation for the asymmetric structure.Our refined single-crystal XRD indicates that Fe deficiency is predominantly concentrated at the Feii positions with an occupancy ratio of 0.8467.Additionally, the upper-layer Fei-a sites have an occupancy ratio of 0.9688, while the under-layer Fei-b sites are nearly fully occupied (Fig. 1h).As observed in the line profile of Fei-a and Fei-b atoms in the ABF-STEM image (Fig. S5c), it is apparent that the image intensity of Fei-a above Feii is weaker than that of Fei-b below Feii.Since the ABF imaging intensity is generally proportional to the number of projected atoms 41 , the contrast difference between Fei-a and Fei-b indicates asymmetric site occupations, suggesting a small quantity of Fe vacancies in the Fei-a site, which is consistent with the results of single-crystal XRD." Author's reply: We sincerely appreciate the insightful comments provided by the referee.Following the comments, we have further systematically grown a series of Fe3-xGaTe2 single crystals by varying the Fe content in the raw material composition, utilizing a Te-flux method.A comprehensive overview of the raw material composition and the final product is outlined in Table R1 (see Author's reply to Referee A' Comment 1).It is clearly demonstrated that controlling the Fe content in the final crystals is achievable by varying the raw Fe ratio.Specifically, when the raw Fe ratio falls below 0.8, the Fe3-xGaTe2 phase cannot be formed.Instead, a mixture of phases, including GaTe and Ga2Te3 phases, is produced.In contrast, when the raw Fe ratio is equal to or greater than 0.9, Fe3-xGaTe2 single crystals can be crystallized, with the Fe content in these single crystals increasing proportionally with the raw Fe ratio.However, an increase in the raw Fe ratio to 1.3 results in the formation of FeTe phase.In the revised manuscript, we have incorporated discussions on controlling the Fe content of the crystals into both the main text (refer to Page 5, Lines 110-120) and Supplementary information (refer to Supplementary Note 1 and Table S1), providing a more comprehensive understanding of the growth conditions and their impact on the final composition of the single crystals.
"In order to control the Fe content, we systematically grew a series of Fe3-xGaTe2 single crystals by varying the Fe content in the raw material composition, utilizing a Te-flux method (see Methods section and Supplementary Note 1).To determine the chemical composition of the as-grown crystals, energy dispersive X-ray spectroscopy (EDX) analyses were conducted on the surfaces of Fe3-xGaTe2 nanoflakes (Fig. 1a) that were exfoliated and placed onto the Si3N4 membrane (see Methods).The ratio of raw materials and the corresponding final crystal composition are listed in Table S1, Supplementary Fig. S1 and Fig. 1b.We found that the Fe deficiencies always exist in these crystals, while the minimum and maximum Fe contents correspond to Fe2.84±0.05GaTe2and Fe2.96±0.02GaTe2,respectively.This result implies the feasibility of inducing Fe deficiency in the samples.To highlight the existence of Fe vacancies, the subsequent studies were focused on the minimum Fe content sample Fe2.84±0.05GaTe2." Response to the Report of Referee B Referee B's General Comment: In this work, the authors demonstrate the presence of Néel skyrmions in FGT due to a DMI tied to broken inversion symmetry from Fe vacancies.Additionally, the authors demonstrate the ability to drive the phase change optically, by locally heating the sample in-situ with a 520 nm pulsed laser during LTEM measurement.While the material parameters are marginally higher than previously reported in 2D vdW skyrmion systems, it seems to me that the novelty of this work comes from the mechanism/crystallography, which should be better described.The physics of the material is the new discovery here, as the material properties are almost the same as existing systems.I think this work needs to be more motivated by, and discuss, the physics of the system, rather than just experimental observations.
In, for example, (Fe0.5Co0.5)3GeTe2: • The presence of Neel skyrmions and THE of approximately the same value has previously been reported (Zhang et al.Room-temperature skyrmion lattice in a layered magnet (Fe0.5Co0.5)5GeTe2.Sci.Adv.8,eabm7103(2022).) • The ordering of the skyrmion lattice has previously been reported (Meisenheimer et al.Ordering of room-temperature magnetic skyrmions in a polar van der Waals magnet. Nat Commun 14, 3744 (2023).) • A similar phase diagram and thickness dependence has been reported (Zhang et al. Room-temperature skyrmion lattice in a layered magnet (Fe0.5Co0.5)5GeTe2.Sci. Adv.8,eabm7103(2022).) • Approximately the same Curie temperature and similar mechanism (ordering of empty Fe sites) has been reported (Zhang et al.A room temperature polar magnetic metal. Phys.Rev. Materials 6, 044403 (2022).) Author's reply: We sincerely appreciate the referee's comments regarding more indepth discussion on the mechanism/crystallography.These valuable comments are of great significance to improve the quality of our manuscript.In the last version of our manuscript, our primary focus is on reporting that the Feii vacancies in Fe3-xGaTe2 induces the displacement of Feii atoms, leading to the breaking of crystal inversion symmetry.These vacancies serve as the source of the Dzyaloshinskii-Moriya interaction, which not only contributes significantly to a room-temperature topological Hall effect but also facilitates the formation of small-sized Néel-type skyrmions with fs laser writability.Following the referee's comments, we have made a major revision to reinforce the discussion on the mechanism and crystallography.The key points of our accomplishments are outlined below, with more comprehensive discussions provided in subsequent responses to Referee B's Comments 2. We hope the referee will be satisfied with the revised manuscript as well as our responses.
(i) To understand the underlying physics for symmetry breaking on Feii sites, we have first conducted a thorough examination, employing improved ABF-STEM images and re-evaluating the single crystal XRD data.Our investigation has revealed that Fe deficiency is predominantly concentrated at the Feii positions.Additionally, a minor presence of asymmetric Fe deficiency at the Fei positions was observed, wherein the upper-layer Fei-a sites exhibit a higher degree of deficiency compared to the lower-layer Fei-b sites.
(ii) Upon establishing the crystal structure, we have conducted structure relaxation based on DFT calculations, considering Fei-a and Feii vacancies separately.Our computational results confirm that the asymmetric Fei-a vacancy primarily induces Feii deviation towards the c direction, which results in the symmetry breaking of the crystal, whereas the Feii vacancy exerts no influence on Feii deviation.In further response to the referee's comments, we'd like to show that there is indeed a functional difference between the T-B cycling and our in-site fs laser quenching approach.Typically, without a magnetic field, zero-field cooling can only result in the formation of interconnected, relatively long stripe domains, but does not spontaneously lead to the creation of skyrmions (as show in Fig. R5a-c) [Nature Communications 13.1 (2022): 3035].However, through our measurements with varying laser pulse fluences, we have identified the possibility of achieving skyrmion writing under zero magnetic field conditions.As depicted in Fig. R5a and R5d, under the condition of a single laser pulse fluence of 1.3 mJ/cm 2 , the stripe domains within the yellow box merely exhibit domain wall movement after the laser pulse.In Fig. R5b and R5e, when we increase the single laser pulse fluence to 9.4 mJ/cm 2 , the stripe domains become narrower and shorter, with some regions breaking to form skyrmions (highlighted in the red box).However, as we increase the single laser pulse fluence to 11 mJ/cm 2 , stripe domains are formed without skyrmions (Fig. R5c and R5f).These in-situ laser fluencedependent experiments indicate that a hybrid state with coexisting stripes and skyrmions is achievable without magnetic field.Currently, all the reported articles on fs laser-induced skyrmion writing have required external magnetic field assistance Author's reply: We sincerely thank the referee for careful reading of our manuscript.
These valuable suggestions and comments are greatly helpful for us to explore a more in-depth physical picture on the mechanism of the DMI.In response to these comments, we have supplemented additional structural characterizations and first-principles calculations, revealing that the asymmetric Fei vacancies are the primary cause of the was used with electron-core interactions described by the projector augmented wave method for the pseudopotentials, and the exchange correlation energy calculated with the generalized gradient approximation of the Perdew-Burke-Ernzerhof (PBE) form.
The plane wave cutoff energy was 400 eV for all the calculations.In calculating the atomic shifts due to Fei-a and Feii vacancies, we used a 2×2 supercell and removed the bottommost Fei-a and Feii atoms.The Monckhorst-Pack scheme was used for the Γcentred 12 × 12 × 1 k-point sampling.All atoms' relaxations were performed until the force become smaller than 0.001 eV/A for determining the most stable geometries.
Response to "DFT is used to simulate the value of DMI, but is the relaxed structure comparable? Does this value match with what is measured (does it give
skyrmions/domains of similar size)?": The above ABF-STEM analysis has revealed the Fe deficiency is predominantly concentrated at the Feii positions, accompanied by a minor deficiency in asymmetric Fei-a positions.However, building such a vacancy model with exact occupancy ratio of Feii and Fei-a atoms requires a very large supercell structure, which is beyond the computation capability for DMI calculation.To reduce the computational complexity, the DFT calculation is based on the fixed crystal structure obtained from single-crystal XRD experiments, instead of a fully relaxed crystal structure optimized from DFT calculations.The reasons are described below: In considering the structural model in Fig. R8, the formation of DMI requires spatial inversion symmetry breaking in upper and lower triangles composed of Fei-Feii-Te atoms.In quantitative terms, the DMI vector can be expressed as where D is the DMI constant, uij represents the unit vector from Fei atom to Feii atom, and z represents the unit vector from magnetic Feii atom to heavy Te atom.It can be seen that if Feii is located at the center position of Te-Te atoms, where ever Fei-a or Feib are located, the upper D1 and lower D2 vectors would always cancel out with each other.This feature suggests that the spatial inversion symmetry breaking of Feii deviations is the primary cause of DMI, while Fei-a vacancies do not contribute DMI.
Thus, we can reasonably assume that atoms are fixed with full occupancy, and ignoring the steps of structure relaxation, which would otherwise necessitate the creation of impractically large supercells in the vacancy model.
In relaxations with fixed δ(Feii) were performed with Gaussian smearing until the forces become smaller than 0.001 eV/Å.Next, spin-orbit coupling was included in the calculation, and the total energy of the system was determined as a function of the spin configuration as shown in Fig. R9a, and d|| equals to (EACW − ECW)/12.The DMI constant D was calculated using the equation D = 3√2d/(N F a 2 ) , where NF is the number of atomic layers, a is the lattice constant and d|| represents DMI strength.In the second step, the EDIFF is set to 10 -8 eV and the tetrahedron method with Blöchl corrections was used to get an accurate total-energy.The resulting relationship between D and δc(Feii − Ga) is presented in Fig. R9b.
Responses to: "What does the anisotropy of the parent compound look like?"
In order to compare the anisotropy of different Fe-content samples, we systematically have grown a series of Fe3-xGaTe2 single crystals by varying the Fe content in the raw material composition, utilizing a Te-flux method.To ensure the reliability of the compositions, EDS mapping was carried out at four distinct cleaved surfaces of these crystals.As surmised in Table R2, the chemical formulas for the crystals with the minimum and maximum Fe content correspond to Fe2.84±0.05GaTe2and Fe2.96±0.02GaTe2,respectively.This result indicates that Fe vacancies always exist in the single crystals synthesized using a Te-flux method.The chemical formula of simulations.The initial zero-field skyrmion state was relaxed from a random state with 100 mT magnetic field.Employing the initial skyrmion state as the input, we systematically applied an in-plane magnetic field along y axis, and relaxed the magnetization to a stable state.
Referee B's Comment 4:
Why is there such a large variance in the sizes of the skyrmions?this is also different to what is generally reported.In fact, it almost looks bimodal in many images.
Author's reply: We thank the referee's comments on skyrmion size variance.In the LTEM experiments, we initially raised the sample temperature above Curie temperature, then cooled it to the target temperature with a 30 mT external magnetic field, and finally removed the external magnetic field to obtain zero-field skyrmions.As shown in Fig. R13a, the zero-field skyrmion density at low temperature (100 K) was relatively low, exhibiting non-uniformed size distribution.However, at higher temperature 250 K and 320 K, the zero-field skyrmion density gradually increased, and the size distribution became more uniform.
To clarify physical mechanism underlying the variation in skyrmion size at different temperatures, we simulated the zero-field skyrmions after field cooling (see Fig. R13 Note).As is known, the formation of skyrmions is determined by a delicate interplay of the magnetic parameters, including magnetic anisotropy Ku, DMI constant D, saturation magnetization Ms, sample thickness t, and exchange stiffness A. However, our experiments have demonstrated that increasing the sample temperature of Fe3-xGaTe2 leads to a significant reduction of magnetic anisotropy Ku, while other parameters keep nearly unchanged.Therefore, as shown in Fig. R13b, we decreased magnetic anisotropy constant Ku = 3.2 10 5 J/m 3 , 1.6 10 5 J/m 3 and 0.7 10 5 J/m 3 in the simulations to represent the increasing of sample temperature.Our simulations demonstrate that for large Ku at low temperature, the density of zero-field skyrmions is low, and the distant between the nearest skyrmions can be considerably large in certain regions, thus facilitating the expansion of skyrmion size upon the removal of the magnetic field.In contrast, with small Ku at high temperature, the skyrmions exhibit a densely hexagonal arrangement to each other, which suppresses the extension of the skyrmions.Consequently, they remain uniformly distributed after removing the magnetic field.
In summary, we discover that a larger Ku at low temperature results in lower skyrmion density, which exhibit a larger skyrmion-skyrmion distance and allow the expansion of skyrmion size.Consequently, the skyrmion size distribution at lower temperatures is non-uniform.In contrast, at higher temperatures, the lower Ku increases the skyrmion density and reduces the skyrmion-skyrmion distance, leading to a much more uniform size distribution.In the revised manuscript, we have incorporated discussions on skyrmion size variations into main text (refer to Page 13, Lines 358-360) and Supplementary information (refer to Supplementary Note 4 and Fig. S18).
Fig. R13 note:
To validate this experimental result, we conducted micromagnetic simulations of the field cooling process and then removing the external magnetic field.
Default magnetic parameters used in the simulations include A = 1.3 pJ/m, Ku = 0.8 10 5 J/m 3 , Ms = 2.5 10 5 A/m, D = 0.25 mJ/m 2 , and slab geometries with dimensions of 512 512 64, with a mesh size of 2 2 2 nm.Periodic boundary conditions were taken into account for large-scale simulations.The zero-field skyrmion state was relaxed from a random state with 30 mT magnetic field, and removing the magnetic field to relax until stable.
Referee B's Comment 5:
There needs to be more interpretation of the results and tying back to a structure-property relation for me to be comfortable recommending this paper.
Referee C's General Comment:
The authors investigate a non-stoichiometric room temperature magnet Fe2.86GaTe2 crystal where the Fe vacancies induce the formation of DMI by spatial inversion symmetry breaking.Such an in-plane isotropic DMI brings about RT Néel-type skyrmions, and the size of the skyrmions can be regulated by the sample thickness and the external magnetic field.The dynamic writing process of RT skyrmions in Fe2.86GaTe2 flakes enhances the potential application of spintronic devices.
The paper is timely and of interest.
Author's reply: We sincerely thank the referee for careful reading of our manuscript and pointing out that our paper is "timely and of interest".The further valuable suggestions and comments provided by the referee are greatly helpful to improve our manuscript.Below we answer the comments in a point-by-point basis.We hope the referee will be satisfied with the revised manuscript as well as our responses.
Referee C's Comment 1:
The Methods section describes the process for obtaining a Author's reply: We sincerely thank the referee for the valuable comments.In order to control Fe content, we have systematically grown a series of Fe3-xGaTe2 single crystals by varying the Fe content in the raw material composition, utilizing a Te-flux method.
Subsequently, comprehensive energy-dispersive X-ray spectroscopy (EDS) mapping was conducted on the cleaved surfaces of these crystals to determine their chemical composition.To ensure the reliability of the EDS results, mapping was carried out at four distinct areas for each sample.A comprehensive overview of the raw material composition and final product is outlined in Table R2.
Our experimental results clearly demonstrate that controlling the Fe content in the final crystals is achievable by varying the raw Fe ratio.Specifically, when the raw Fe ratio fall below 0.8, the Fe3-xGaTe2 phase cannot be formed.Instead, a mixture of phases, including GaTe and Ga2Te3 phases, is produced.In contrast, when the raw Fe ratio is equal to or greater than 0.9, Fe3-xGaTe2 single crystals can be crystallized, with the Fe content in these single crystals increasing proportionally with the raw Fe ratio.However, materials and the corresponding final crystal composition are listed in Table S1, Supplementary Fig. S1 and Fig. 1b.We found that the Fe deficiencies always exist in these crystals, while the minimum and maximum Fe contents correspond to Fe2.84±0.05GaTe2and Fe2.96±0.02GaTe2,respectively.This result implies the feasibility of inducing Fe deficiency in the samples.To highlight the existence of Fe vacancies, the subsequent studies were focused on the minimum Fe content sample Fe2.84±0.05GaTe2." Table R3.Summary of the raw material composition and the final product for the growth of Fe3-xGaTe2 samples using the self-flux method.Author's reply: We sincerely thank the referee for the valuable comments.In the previous version of our manuscript, we reported the observation of Néel-type skyrmions in the Fe3-xGaTe2 single crystals synthesized with a raw Fe ratio of 0.9.Their average chemical formula was denoted as Fe2.86GaTe2 for EDS mapping while Fe2.79GaTe2 for XRD refinement.It is crucial to highlight that all the crystals were grown using the same temperature conditions and raw material composition.The observed variation in the average chemical formula can be attributed to measurement errors associated with different characterization techniques.
In the revised manuscript, to enhance the accuracy of the chemical formula, an error bar has been added by summarizing the EDS results obtained at four different areas, and the chemical formula is denoted as Fe2.84±0.05GaTe2,which remains within the error range of the XRD refined chemical formula Fe2.79GaTe2.We have integrated error analysis of EDS determined chemical formula into both the main text (refer to Page 5, Lines 118-119) and Supplementary information (refer to Supplementary Note 1 and Table S1).
(see Fig. S19 and Supplementary Note 5 for the determination of magnetic parameters), which contribute to the reduction in skyrmion size (see Table S3, Fig. S20 and Supplementary Note 6 for the corresponding micromagnetic simulations)." Table R4.Magnetic parameters for skyrmion-host 2D materials.Author's reply: We sincerely thank the referee for the valuable comments.We have revised such expressions as recommended.Author's reply: We sincerely thank the reviewer for the valuable suggestions.We have replaced the figures as recommended.Referee B's Comment 2: I think there can be more explicit attention in the text to how pulsed writing is different from the quasi-thermodynamic state.E.g., as in the response: "Typically, without a magnetic field, zero-field cooling can only result in the formation of interconnected, relatively long stripe domains, but does not spontaneously lead to the creation of skyrmions (as show in Fig. R5a-c Author's reply: We sincerely thank the reviewer for the valuable suggestions.We have added the discussions as recommended.
Response to the Report of Referee
"Furthermore, conventional zero-field cooling can only result in the formation of interconnected, relatively long stripe domains, but does not spontaneously lead to the creation of skyrmions.To demonstrate the differences with in-site fs laser quenching approach, we conducted fluence-dependent laser pulse excitation without magnetic field."Referee B's Comment 3: The authors could also make the statement "spin clusters that contain topological defects such as skyrmionic and anti-skyrmionic nucleation centers (snapshot at t1) due to the ultrafast cooling at a quenching rate of up to 1012 K/s" (L457) more obvious to make the work read better.
Author's reply: We sincerely thank the reviewer for the valuable suggestions.We have revised the statements as recommended.
"As shown in Fig. 5d (see details in Movie S1), following the excitation by the femtosecond (fs) laser pulse, the initial melted spin state (snapshot at t0) rapidly evolves into numerous nanoscale spin clusters.These clusters contain topological defects, including skyrmionic and anti-skyrmionic nucleation centers (snapshot at t1).This transformation occurs due to the ultrafast cooling, achieved at a quenching rate of up to 10 12 K/s." Overall, the authors have rigorously answered the concerns in my initial review and I recommend the manuscript for publication.
Author's reply: We sincerely thank the reviewer for recommending our manuscript to be published in Nature Communications.
Referee C's General Comment:
In the revised manuscript and supplementary information, the authors have thoroughly characterised their sample, including EDS mapping and XRD data.In addition, they have made new discoveries regarding the Fei and Feii vacancies and provide a reasonable explanation for the symmetry breaking of the Fe2.84±0.05GaTe2crystal structure, which is also consistent with first-principles calculations.Overall, the authors have adequately addressed the reviewers' comments.
I recommend the manuscript for publication in Nature Communication.
Author's reply: We sincerely thank the reviewer for recommending our manuscript to be published in Nature Communications.The valuable suggestions and comments furnished by the referee are greatly helpful to improve our manuscript.
Fig. R1. a Experimental CBED patterns of Fe2.84±0.05GaTe2from [112 ̅ 0] direction.b Ray diagrams showing how increasing the C2 aperture size causes the CBED pattern to change from one in which individual disks are resolved to one in which all the disks overlap.
Referee A's Comment 3 :
The Fe columns in Fig.1fare not really clear.The atomic columns do not look clean.A better clear HAADF-STEM image will be better that shows better Fe Columns.Would the author claim that the Fe deficiency occurs only at those FeII positions?Author's reply: We highly appreciate the referee's comments.To provide a clearer view of the Fei and Feii columns, we have further acquired improved HAADF-and ABF-STEM images of the Fe2.84±0.05GaTe2sample along the [112 ̅ 0] zone axis, as shown in Fig.R3.To highlight the detailed information about the Feii columns, a magnified image was derived from the enclosed region of the ABF-STEM image, as shown in Fig.
Fig
Fig. R4. a Magnified ABF-STEM image of the single Fe2.84±0.05GaTe2layer.b Integrated imaging intensity line profile along the c-axis within the area marked by the Te-Feii-Te atoms in the blue rectangles.The red region indicates the Feii deviation from the centers of Te-Te atoms.c Integrated imaging intensity line profile of Fei-a-Fei-b atoms.Referee A's Comment 4: Can the author control the Fe content?
(
iii) Furthermore, we explored the correlation between Feii deviation values δc (Feii − Ga) and the DMI constant D based on the DFT calculations.It is found that no DMI is observed (the value of D is equal to zero) in a centrosymmetric structure without Feii deviation (δc = 0).However, once the Feii atom deviate from Ga plane (δc < 0) due to the asymmetric Fei-a vacancy, the inversion symmetry is broken and the value of D increases accordingly as the increase of the Feii deviation.It should be noted that the calculated value of D is comparable to that established in experiments, confirming the reliability of our structural model.We appreciate the referee's recommendation of the excellent work on (Fe0.5Co0.5)5GeTe2and polar van der Waals magnets.All relevant references have been appropriately cited in the main text to acknowledge the valuable contributions of the prior work in the field.Referee B's Comment 1: And while the in-situ measurement is new, its value would come from actual dynamical measurements-by doing just quasistatic quenching, it doesn't seem like there is functionally any difference between just T, B cycling without the optical pump (Meisenheimer et al.Ordering of room-temperature magnetic skyrmions in a polar van der Waals magnet.Nat Commun 14, 3744 (2023); Zhang et al.Room-temperature skyrmion lattice in a layered magnet (Fe0.5Co0.5)5GeTe2.Sci.Adv.8,eabm7103(2022)).Especially so because you motivate the experiment from the perspective of "ultrafast writing of skyrmions" (L 104, L 313).Author's reply: We are grateful for the referee's positive feedback and suggestions regarding the in-situ measurements.Femtosecond laser (fs) control of topological magnetic structures is a promising and still relatively unexplored field, involving complex physical processes such as ultrafast demagnetization [Nature Communications 14.1 (2023): 1378], optically induced magnetism [Physical Review Letters 125.26 (2020): 267205], optical-pumped spin dynamics, all-optical magnetization reversal, and more.It is widely known that, magnetic skyrmions in twodimensional materials are often metastable and typically require temperature-magnetic field (T-B) cycling [Nature Communications 13.1 (2022): 3035; Nano letters 22.19 (2022): 7804-7810], which is time-consuming and energy-intensive.In contrast, fs laser pulse-induced skyrmion writing, based on its unique quenching effect, offers the advantages of fast speed and low energy consumption [Nature Materials 20.1 (2021): 30-37].Moreover, it also allows for the adjustment of laser spot size and location, enabling selective writing in specific regions [Nano Letters 18.11 (2018): 7362-7371].Consequently, there has been significant interest in laser-induced change and switching of topological spin textures in recent years, which allows the exploration of metastability and hidden phases of topological spin textures [Science Advances 4.7 (2018): eaat3077].
[
Fig. R5.a, b, c.Ground states of the stripe domains obtained through zero-field cooling.d,e, f.Magnetic domain states after a single fs laser pulse with the fluence of 1.3 mJ/cm 2 , 9.4 mJ/cm 2 , and 11 mJ/cm 2 , respectively.The red boxes indicated isolated skyrmions after a single fs laser pulse.
Feii
Fig.R7aand R7b illustrate the scenario with no Fe vacancy.The electron density (colored in yellow) strongly overlaps between Feii-Ga atoms, forming the Feii-Ga honeycomb lattice plane with robust chemical bonding (highlighted by black dashed lines).Simultaneously, the Fei-a and Fei-b dimers are located at the center of the Feii-Ga honeycomb lattice but do not bond with Feii-Ga atoms.Consequently, the chemical bonding is mirror-symmetric along the Feii-Ga plane, leading to the absence of Feii displacements.Thus, perfect Fe3GaTe2 exhibits a centrosymmetric crystal structure with c → c mirror symmetry.
Fig. R7 .
Fig. R7.The relaxed crystal structure and corresponding electron density of Fe3-xGa atoms sliced from Fe3-xGaTe2 with a, b no vacancy, c, d Feii vacancy and e, f Fei-a vacancy.The black dashed line indicates the chemical bonding with electron-density overlapping.The yellow-colored electron densities are shown at the same isosurface value.
Fig. R8.Schematic illustration of DMI in asymmetric layers viewed from [112 ̅ 0] zone axis.The red arrow D1 represents the direction of DMI vector in the upper triangle composed of Fei-Feii-Te, while the blue arrow D2 represents the lower part in the opposite direction.The black arrow Deff represents the sum of the non-zero DMI vector.
Fig
Fig. R9.a, b Spin configurations implemented to calculate the DMI for clockwise (CW) and anticlockwise (ACW).c The calculated and experimental results for the relationship between the DMI and the Feii deviation.
Fig.
Fig. R9 note: The calculation of the DMI vector involved two steps.First, structural
Fig
Fig. R12 Note: Micromagnetic simulations were carried out using the GPU-accelerated micromagnetic simulation program Mumax 3 .Default magnetic parameters used in the simulations include A = 1.3 pJ/m, Ku = 0.8 10 5 J/m 3 , Ms = 2.5 10 5 A/m, D = 0.25 mJ/m 2 , and slab geometries with dimensions of 1024 1024 16, with a mesh size of 2 2 4 nm.Periodic boundary conditions were taken into account for large-scale
Fig
Fig. R13. a Lorentz Phase images of zero-field skyrmion after 30 mT field cooling at 100 K, 250 K and 320 K. b Micromagnetic simulations of zero-field skyrmion after 30
Fe2.86GaTe2 single
crystal, which was achieved directly by precisely controlling the initial molar ratio of the powder mixtures and growing conditions.I am inquiring about the method of determining the optimum molar ratio in this work?by experiment or theoretical calculation.And what is the advantage of using iron deficiency as a means of breaking the centrosymmetric structure compared to other methods, such as elemental doping (Ref.31)?
Fig. R14.Variation of simulated magnetic structure and corresponding skyrmion sizes with varied a, b DMI constant D, c, d saturation magnetization Ms, e, f sample thickness t, g, h magnetic anisotropy constant Ku, and i, j exchange stiffness A.
Fig. R14 note :
Fig. R14 note: Micromagnetic simulations were carried out using the GPU-accelerated micromagnetic simulation program Mumax 3 .Unless specified otherwise, default magnetic parameters used in the simulations include A = 4.0 pJ/m, Ku = 2.4 10 5 J/m 3 , Ms = 3.0 10 5 A/m, D = 0.90 mJ/m 2 , and slab geometries with dimensions of 512 512 64, with a mesh size of 2 2 2 nm.Periodic boundary conditions were taken into account for large-scale simulations.The initial skyrmion state was relaxed from a random state with 60 mT magnetic field.Employing the initial skyrmion state as the input, we systematically varied the magnetic parameters-D, Ms, t, Ku, and A individually-subsequently allowing the magnetization to evolve into a stabilized state through relaxation processes.
The authors have provided additional data to support the determination of the new phase.I am satisfied with the response, and I would recommend to proceed to publication.Just a note to the authors, it's easier to use CBED whole pattern and look into the HOLZ lines to check the symmetry.Author's reply: We sincerely thank the reviewer for recommending our manuscript to be published in Nature Communications.The valuable suggestions and comments furnished by the referee have significantly contributed to enhancing the quality of our manuscript.Response to theReport of Referee B Referee B's General Comment: The authors have clearly spent time answering my and the other reviewer's concerns and with the clarity that has been added to the introduction, namely the larger emphasis on the role of vacancy ordering, I am much more comfortable recommending this work for publication.Author's reply: We sincerely thank the reviewer for recommending our manuscript to be published in Nature Communications.Referee B's Comment 1: I think there are a few textual things that could be changed to increase the impact of the work, but the story and the physics are much clearer in the newer version.I think a subset of the SEAD patterns from Fig R2 should be added to Figure 1-the superlattice peaks are the strongest evidence for ordering and help explain the DMI.This could replace Fig 1g, since it is the same information but clearer.
The space group changed to non-inversion symmetry based on the XRD and HAADF-STEM image.It will be more convincing if additional data using CBED to confirm the space group.
Table R2 .
Summary of the raw material composition and the final product for the growth of Fe3-xGaTe2 samples using the self-flux method. | 9,986.2 | 2024-02-03T00:00:00.000 | [
"Physics",
"Materials Science"
] |
Exploring Mars with Returned Samples
The international Mars Exploration community has been planning to return samples from Mars for many years; the next decade should see the plans becoming a reality. Mars Sample Return (MSR) requires a series of missions, first to collect the samples, then to return them to Earth, whilst preventing the contamination of both Earth and Mars. The first mission in the campaign, Mars 2020, will land at Jezero Crater in early 2021; samples should return to Earth sometime after 2032. The information to be derived from analysis of martian samples in terrestrial laboratories equipped with state-of-the-art instrumentation is more than recompense for the difficulties of the MSR campaign. Results from analyses of returned samples will enable increased understanding of martian geological (and possibly biological) evolution. They will facilitate preparations for human exploration of Mars and by providing a second set of absolute ages for a planetary surface will validate (or otherwise) application of the lunar crater-age scale throughout the Solar System.
Introduction
Ever since the Mariner 9 mission of 1971-2 returned images of the martian landscape showing networks of craters, dried-up river valleys and towering (but extinct) volcanoes (Mutch and Saunders 1976), it has been known that Mars experienced impact, fluvial, volcanic and aeolian processes -and a potential for martian life to develop. Knowledge of the extent and complexity of these processes has increased in detail with each succeeding space mission. We now have global scale coverage of the planet at visible wavelengths and almost Fig. 1 (a) The EETA 79001 meteorite, showing areas of black glass on its cut-surface (image credit NASA-JSC); (b) Compositional evidence that gas trapped inside EETA 79001 has the same composition as Mars' atmosphere (reproduced from Pepin 1985) total coverage in the infrared. We know the distribution of craters across the planet's surface, the location of the main volcanic regions, the existence of a complex network of fluvial features and the composition and dynamic properties of the atmosphere. This has given us a broad idea of Mars' evolutionary history based on a relative chronology that ties together the different processes (Tanaka 1986). Changes in mineralogy brought about by aqueous alteration have been observed from orbit and at the landing sites of the Spirit, Opportunity and Curiosity rovers (e.g., Ehlmann and Edwards 2014;Ruff and Farmer 2016;Bedford et al. 2019). These changes, when related to a cratering chronology, can be modelled to yield information about the timing, extent and duration of fluid flow, as well as constraining the temperature and composition of the fluid (e.g., Griffith and Shock 1997;Ehlmann et al. 2011;Bridges et al. 2015;Zolotov and Mironenko 2016).
Complementary information about Mars is obtained from martian meteorites, a group that has been recognised since the Antarctic meteorite Elephant Moraine (EETA) 79001 was found to contain pockets of clear black glass in which gas with the composition of Mars' atmosphere was trapped (Bogard and Johnson 1983;Becker and Pepin 1984;Pepin 1985 and Fig. 1).
It was inferred that EETA 79001 had been ejected from Mars during an impact cratering event. Once this suggestion was accepted, analysis of oxygen isotopic composition indicated that other igneous meteorites with young crystallisation ages had the same source (Clayton and Mayeda 1983;Franchi et al. 1999). Since then, the number of samples of Mars has grown to over 240 -although the number of individual meteorites is unknown because of the difficulty of matching samples that fragmented prior to collection. The Meteorite Database at https://www.lpi.usra.edu/meteor/metbull.php provides regular updates of this figure. Differences in mineralogy and mineral chemistry imply that they represent at least nine different rock types coming from a number of discrete impact events from different regions of the planet (Righter 2017). Unfortunately, whilst comparisons can be made between the compositions of the Martian meteorites (as measured in a laboratory) and composition of Mars (as measured by orbiting and landed craft), it is not possible to determine the specific region of Mars from which any of the meteorites has been derived (Werner et al. 2014). Thus, their utility in addressing questions regarding the origin and evolution of Mars has limitations. In addition, the meteorites are all igneous rocks (apart from the breccia NWA 7034 and its pairs) so their ability to record evidence of martian biological activity in a hydrous sedimentary context is limited. Admittedly, several of the samples (particularly the nakhlites and the orthopyroxenite ALH 84001) contain assemblages of secondary minerals (combinations of phyllosilicates, carbonates, halite, etc.) produced by water on Mars (e.g., Bridges et al. 2001;Melwani Daswani et al. 2016;Lee et al. 2018) infiltrating fractures and which have the potential to harbour traces of life, but to date, no such traces have been definitively recognised (McKay et al. 1996;Steele et al. 2007).
There are, then, three highly complementary and detailed sets of data from which information about Mars may be inferred. Images and spectral information from orbiting spacecraft have provided global coverage of landforms and associated mineralogical variations. This has allowed a relative chronology for fluvial and volcanic activity to be established. Information obtained by instruments on rovers and landers have enabled much more detailed investigation of smaller, more specific areas on the planet. Finally, the absolute ages of meteorites ejected from Mars during impacts has helped to constrain the timing of periods of volcanic activity and aqueous alteration. Measurement of the elemental and isotopic composition of secondary minerals have informed models of fluid flow and water-rock ratios to illustrate how different generations of alteration relate to each other (Melwani Daswani et al. 2016).
Despite all this information and the insight it has given to Mars' evolutionary history, each dataset is lacking in some way, such that we cannot yet construct a complete (absolute) timeline for Mars. Data from orbiting spacecraft define a relative chronology and give context to data from rovers and landers. Data from rovers and landers provide more detailed chemical and mineralogical compositions to refine global stratigraphy whilst martian meteorites provide absolute ages and detailed compositional data for specific phases but without any contextual information. The most logical next step to increase knowledge about Mars is to bring samples from known locations back to Earth for analysis.
Return of samples from the surface of Mars has been a goal of the international Mars exploration community for many years. There has been much discussion of the profile of a sample return mission, including comparison with the return of the Apollo 11 samples from the Moon. Almost as soon as Armstrong set foot on the lunar surface, he collected a 'contingency sample'. This comprised a few scoops of lunar regolith from close to the lander (Kramer et al. 1977). The idea behind the collection was that if things became non-nominal (i.e., went wrong) and the astronauts had to leave the lunar surface rapidly, they would at least have some material to bring back. Following that example, the utility of a 'grab and go' mission to Mars has been considered. However, as understanding of the interplay between different reservoirs (atmosphere, hydrosphere-cryosphere and lithosphere) has increased and realisation of the potential habitability of Mars has grown, it has become clear that return of a 'grab and go' (i.e., randomly-collected) sample would be insufficient to answer the major questions of interest about Mars, viz the absolute age of the bedrocks and the potential for life in the subsurface (Beaty et al. 2019). The most recent iteration of a Mars Sample Return mission is described below.
The Pros and Cons of Ex Situ Versus in Situ Analysis of Martian Samples
There are (at least) four advantages to analysing a sample back on Earth (ex situ) compared with making analogous measurements on Mars (in situ). There are also several disadvantages to removing a sample for measurement elsewhere. We will consider the advantages of sample return first. In doing so, we make the assumption, just as is recommended in the iMOST report (Beaty et al. 2019), that suites of samples are collected from a locality rather than single specimens. It is also assumed that appropriate instrumentation is available and operational.
Advantages of Ex Situ Analysis
(i) Sample preparation techniques: a major advantage of analysis of returned samples compared with analysis of a sample in situ on Mars' surface is that more sophisticated sample preparation techniques can be employed to obtain high quality and high precision information. One of the most widely-applied techniques in geological investigations is to make a polished section from a rock sample. A single chip of rock with dimensions of less than 2 × 2 × 2 mm, once embedded in resin, can be sliced to produce several sections or mounts. An entire panoply of measurements can be made on a single polished mount at high spatial resolution revealing details that are invisible at lower magnifications. A polished mount is suitable for analysis by electron microscopy to determine the major and minor element compositions of specific minerals, the texture, grain size and luminescence properties of the rock, its alteration by fluids and its structure and shock history. The mount would also be available for examination by other microbeam techniques, including electron microprobe (for trace elements) and ion probe (for elemental and isotopic compositions). Similarly, spectroscopy (across infrared, visible and ultraviolet wavelengths and Raman) could be performed on a polished mount to obtain structural and organic information. Other important preparation techniques are possible in a terrestrial laboratory that cannot (as yet) be performed on Mars' surface. For example, the picking of individual, specific mineral grains from a specimen, followed by dissolution of the grains in a series of acids for age-dating. Or processing of solid materials through sequential solvent and acid extractions and derivatisation for the analysis of organic molecules. Although the SAM instrument suite on Curiosity is able to perform some wet chemistry experiments, including derivatisation (Freissinet et al. 2015), it is only carrying a very limited number and amount of chemicals. This precludes the more sophisticated analyses carried out in organic chemistry laboratories on Earth, where different 'recipes' for extractions can be substituted if appropriate. (ii) Range of analytical techniques and instruments: another of the major advantages of having samples returned to Earth is the range of analytical techniques that can be applied to them. All missions are restricted in terms of payload: factors such as cost, mass, volume, power requirements and data transfer (bandwidth) limits constrain choices of instruments and place limits on the ensemble of measurements that can be made by any specific mission. Additionally, although instruments onboard rovers are now capable of determining the mineralogy, mineral chemistry, volatile elemental and isotopic compositions of rocks and soil on the surface of Mars, there are still gaps where information is missing because analytical techniques have not yet been adapted or developed for deployment on spacecraft. For example, a CT scanner is a large piece of equipment that is becoming almost a 'workhorse' instrument to produce 3-D images of rocks, showing texture, porosity and composition. If attached to a synchrotron source, the measurements can be at very fine-scales on samples as small as individual grains. It is unlikely, in the foreseeable future, that such an instrument could be miniaturised for deployment on Mars.
(iii) Responsive investigation pathways: A further advantage of laboratory analysis is that investigation pathways are responsive to each discovery. Measurements using different techniques on a single aliquot of material (which might be chip, powder, polished mount, separated minerals, etc.) can run in sequence and be designed so that one technique does not interfere with results expected from a technique to be applied subsequently. Measurements can be repeated (by the same or different analysts) for internal checking and calibration. Whilst planning of an analytical sequence is essential to get the best possible results from a limited amount of material, because the measurements are taking place in 'real time' (i.e., not autonomously on a planetary surface where instructions must be uploaded ahead of time), changes to a sequence can be made if unusual results are produced from one technique that might require verification from a different point on the analysis pathway. Finally, the repeatability of critical measurements is readily achievable by inter-laboratory comparisons on Earth, but is impractical on Mars. (iv) Spatial and spectral resolution: Instruments on orbiting spacecraft are generally returning data acquired at low magnification or relatively low spectral and spatial resolution from large areas of the martian surface. The instruments are constrained by limited power availability, mapping geometry, data storage capacity and bandwidth and communication timelines. Although a vast library of information has resulted from orbiting missions, there are gaps where the footprint of the instrument is on a larger-scale than the features being mapped. For instance, one of the outstanding questions about Mars' changing climate is the extent and duration of fluid flow on the surface. The distribution of clay minerals (and other alteration products) is of critical importance in trying to establish the temperature and timing of different fluvial events (e.g., Bishop et al. 2008;Carter et al. 2013), yet differentiation between phyllosilicate species is not always possible from orbit (Ehlmann and Edwards 2014). Similarly, taking a closer view of process-related and diagnostic textural associations using rover-based instruments may show discrepancies between orbital and surface results if there is a much greater variety of minerals present at a site than can be detected from orbit (e.g., Mangold et al. 2017;). There are definitive analyses (e.g., mineral crystal structure determination by X-ray diffraction) that could be performed on the surface, but appropriate instrumentation has rarely been deployed, thus limiting the intercomparability of results among multiple surface-mission operation areas (Velbel 2018). Laboratory-based equipment can be employed to make measurements at finer spatial scales and get more accurate results for the individual proportions and compositions of minerals present in a mixture of components. As well as gaining information on the source regions for the different minerals, leading to understanding of the environment in which the rock formed, results can be used to calibrate orbital measurements so that the occurrence of the same compositional mixture can be mapped across the surface with greater confidence.
Disadvantages of Ex Situ Analysis
The disadvantages of removing a sample from its environment prior to analysis revolve around changes that might occur because the sample is no longer in thermal or redox equilibrium with its surroundings. Before the material arrives back on Earth, it will have spent at least 10 years encapsulated within sample tubes. For part of that time, the tubes will be on Mars' surface experiencing diurnal and annual thermal cycling that might differ from the cycles the samples experienced when in situ. This has the potential to alter the samples, which up to the point of collection would have been in chemical equilibrium or steady state with the surrounding strata. Following that, the material will be in orbit around Mars or Earth, or in interplanetary space, each time experiencing a different set of thermal (and gravitational) conditions. However, when the sample tubes are sealed, the material inside will represent the conditions at the time of their collection. The environment of collection will be known and recorded, so enabling back calculation of the initial conditions of the material. This is important if we wish to infer water activity, redox potential, etc.
There are specific mineral species that are sensitive to changing environmental conditions, including phyllosilicates (clay minerals) and hydrated salts. Experiments have shown that reactions between anhydrous minerals and water (especially as brine) occur very rapidly, on a timescale of days, even at low temperature (Phillips-Lander et al. 2019). Hence reactions between mineral assemblages in the MSR collection tubes with water of crystallisation or inter-layer water released from clay minerals could change the chemical balance of the system and not be a true reflection of the hydration state of the original material collected. There could even be movement of cations during the reactions -for example, removal of aluminium from plagioclase would alter fluid composition and change the products of serpentinization of olivine (Pens et al. 2016). The conversion of Fe-and Mg-bearing phyllosilicates to Al-bearing silicates is a key marker for changing environmental conditions on Mars (Bishop et al. 2018). It is essential, if we are to be certain that the samples returned to Earth can be interpreted as representative of the material that was collected on Mars, that analyses are performed of the material whilst it is in situ on Mars.
MSR Mission Design
As currently planned, Mars Sample Return is not a mission, but a campaign of several missions; one possible architecture for the campaign, with the different component missions is shown in Fig. 2. In this scenario, the first mission is NASA's Mars-2020 rover, due to launch in July 2020. The rover has the same chassis as the highly-successful Curiosity rover, with a different complement of on-board instrumentation. An important part of the equipment is a drill which will produce small cores, about 5 -6 cm long and 1 cm in diameter. The cores will be stored in sample tubes that will be cached on Mars' surface for retrieval by a subsequent mission. The Mars-2020 rover will land at Jezero Crater (Fig. 3); the prime mission aims to explore part of the delta that once debouched into the crater. The rover will collect and cache about 20 samples from different localities on the crater floor. If an extended campaign is approved, the rover would traverse the crater wall and navigate towards the Noachian deposits of NE Syrtis, caching a further set of samples (∼20).
The return part of the MSR mission is co-funded by ESA and NASA. It comprises four separate elements carried on two spacecraft, both of which are currently scheduled for launch in 2026. The first launch, the NASA-led Sample Retrieval Lander, will carry three of the elements: NASA's Mars Ascent Vehicle (MAV), ESA's Fetch Rover and the Orbiting Sample canister (OS). The rover will collect the samples cached by Mars-2020 and return them to the MAV where they will be loaded into the OS canister; either the rover or the OS could carry tubes designed to collect atmospheric samples. The MAV will carry the OS canister for release into low Mars orbit.
The second launch, the ESA-led Earth Return Orbiter acts as a communications relay for the lander. Once the OS is launched into Mars orbit, the orbiter will pick it up, encapsulating it in the Earth Entry Vehicle and return it to Earth orbit. The planning schedule has landing of the Earth Entry Vehicle in 2032. The mission needs this level of complexity with different vehicles launching from Mars and returning to Earth in order to 'break the chain' of contact between the two planets. This is to conform with planetary protection requirements designed to ensure no contamination of the Earth by material returned directly from Mars (Rummel et al. 2002).
Jezero Crater
Jezero Crater (77.5°E 18.4°N) was selected as the landing site for Mars 2020 in October 2018. It is a 45 km diameter crater on the west margin of Isidis Basin and just north of NE Syrtis (Fig. 3a). In the Late Noachian-Early Hesperian, there is thought to have been an open-basin paleolake up to 250 m deep within the crater (Goudge et al. 2012) into which a river debouched leaving behind a series of deltaic sediments (Goudge et al. 2017(Goudge et al. , 2018. The remains of a fan-shaped delta can clearly be seen in Fig. 3b. The lake drained a vast watershed to the South and West of Jezero (Fassett and Head 2005); sediments transported into the lake should be a mix of the erosion products of the differing rock types of the source region. The Mars 2020 rover will land on the crater floor, probably on to the series of lacustrine deposits that include olivine-and magnesium carbonate-bearing rocks . Above these pale-coloured sediments that cover the original basement floor is a series of deltaic deposits displaying a range of alteration species (Goudge et al. 2017). The whole is capped by a unit that may be a lava flow or volcanic sediments derived from a flow (Goudge et al. 2015;Ruff 2017).
In sum, Jezero Crater is a sequence of lacustrine and fluvio-deltaic deposits, sampling of which during the prime mission would respond to the iMOST objectives for exploration of a sedimentary system (Objective 1.1; Beaty et al. 2019). If the mission is extended, then the (current) plan is to traverse the crater rim to collect material from the rim and of the Noachian-age mega-breccia related to the volcanic units of NE Syrtis. It is hoped that samples returned from Jezero would include material from the basement unit of the crater floor (if there are any exposures) plus samples of lacustrine sediments and deltaic material.
What Might Be Learnt from the Jezero Crater Samples?
As stated above, samples returned from Jezero Crater will be appropriate to address Objective 1.1 of the iMOST objectives: Interpret the Primary Geologic Processes and History that Formed the Martian Geologic Record, with an Emphasis on the Role of Water (Beaty et al. 2019). In other words, returned samples will be used to understand the processes, timing, geochemistry, and biological potential of a sedimentary system. Lacustrine basins like that of Jezero Crater keep a good record of paleoenvironmental and paleoclimatic conditions because they often have a continuous stratigraphy (e.g., Smoot and Lowenstein 1991;Renaut and Last 1994), independent of whether or not the basin has been open (i.e., has outflowing rivers or streams or has over-flowed a crater margin) or closed. Knowing already that Jezero Crater contains rocks produced by fluvial, lacustrine and aeolian processes (Goudge et al. 2012(Goudge et al. , 2015(Goudge et al. , 2017(Goudge et al. , 2018, then acquisition of samples from the different strata will be essential to disentangle the history of the Jezero palaeolake and its surrounding catchment area. As well as the provenance of the material that formed the sediments, specific processes that operated on the sediments would be studied, including diagenesis, weathering, transport and deposition. Once the history of surface and near-surface water and its interactions with the sediments and the atmosphere is developed, a record of the ancient climate of Mars preserved in the resultant rocks should be revealed, constraining models for the range of past climate variations. The results will enable construction of an evolutionary timeline for the basin, providing absolute ages for sedimentary deposition under different environments (fluvial, lacustrine, aeolian). If the current understanding of the geology of Jezero Crater is correct, then samples obtained from the crater floor should be volcanic, further elucidating the history of the crater, presumably placing an upper limit on its formation age.
A significant parallel set of investigations to the chronological question is the search for preservation of organic material and assessment of the biological potential of the material. One of the reasons that Jezero crater was selected as the landing site for Mars 2020 is that palaeolakes are an ideal location for the deposition and preservation of organic material. Samples returned from the crater may contain evidence of martian life, even if at the biomarker level rather than as fossils (Summons et al. 2011).
An additional benefit of studying a suite of samples from Jezero crater is that the results will allow reconstruction of an entire martian sedimentary system, from which it should be possible to apply the results to other sedimentary regions observed from orbit.
Using Returned Samples for Evaluation of Potential Hazards to Human Exploration
The return of samples to Earth from Mars is also an important part of any Human Exploration programme (NASA 2009). At the most basic of levels, the mission would demonstrate that a spacecraft could lift off from a body with significant gravity and an atmosphere and return to Earth safely. As well as this most fundamental aspect of the relevance of MSR, the iMOST report established three separate areas in which returned samples would contribute: planetary protection, engineered systems safety and hazards to human health. Planetary protection has been discussed earlier and there are no specific additional concerns for human space exploration. The safety, efficiency and effectiveness of engineered systems, such as breathing apparatus, transport, power generation, etc., are likely to be reduced following exposure to martian dust. There are many examples from the Apollo missions (e.g., Gaier 2005; Kobrick and Agui 2019) where dust abraded space suits, infiltrating joints and seals, and the likelihood of a similar effect on Mars is high. The presence of a martian atmosphere means that wind can lift and transport dust, turning the dust into a very efficient agent of abrasion; the long duration of a martian mission would cause exposed surfaces to be scoured and seals to degrade. Whilst there is a need to test the resistance of spacesuits and other systems to the effects of martian dust, it is unlikely that samples returned from Mars would be available for such experiments, even if they were suitable. The MSR mission currently being planned will return limited amounts of sample, mainly rocks. It is not scheduled to collect airfall dust, which is the material required in relatively large quantities for testing. However, the returned tubes, which will have been exposed on the martian surface for around 10 years, will almost certainly be covered in dust -and it is possible that this material might be suitable for the abrasion testing. What is likely to be more useful, though, is that collection and characterisation of the airfall dust from the exterior surfaces of the sample tubes will help in production of a high-quality dust simulant. The grain size, shape, angularity, composition and density of the airfall dust will be replicated and large quantities synthesised, enabling large-scale testing of engineering systems to be undertaken.
Infiltration of dust through joints and seals is a hazard to astronaut health as well as to engineering systems. Analysis of the returned samples, especially the airfall dust, would take place as part of the planetary protection procedures to determine whether there were any biological or geochemical hazards in the samples that might affect humans. "Breaking the chain of contact" when leaving Mars is currently a requirement for containment of all material returning from Mars including any mission items that have been exposed to the martian environment. Whilst this can be put into practice for autonomous missions, it is certainly not practicable for missions involving astronauts -no matter how well-designed future generations of space hardware (including spacesuits) are, it is highly likely that astronauts will come into contact with martian dust, as was the case for Apollo astronauts and lunar dust (NRC 2002;Gaier 2005;NASA 2009;COSPAR 2011). One of the most important goals of sample return will be to determine the toxicity of martian material, and whether or not it comprises a biological hazard.
Science Goals
There are many questions that remain unresolved about Mars. The iMOST report (Beaty et al. 2019) organised the questions into seven objectives ( Table 1). Five of the objectives related to understanding the geological evolution of Mars, possible interactions between the geosphere and potential biosphere and signatures of life. The remaining two objectives were in support of human space exploration. The first objective was specified to 'Interpret the primary geologic processes and history that formed the martian geologic record, with an emphasis on the role of water'. It had Fig. 4 Timeline of major events in Mars' history, with the geologic aeons of Earth. Question marks indicate cases where processes could also have occurred earlier but the geologic record is obscured by subsequent events. Figure and caption adapted from Wordsworth (2016). Based on data from Ehlmann et al. (2011), Fassett andHead (2011), Head and Pratt (2001), Werner (2019) and Werner and Tanaka (2011). Martian meteorite data from Borg et al. (1999), Swindle et al. (2000), Nyquist et al. (2001), Agee et al. (2013). Gale Crater data from Farley et al. (2013) five sub-objectives, related to different geological environments (sedimentary, hydrothermal, deep sub-surface groundwater, sub-aerial and igneous). Since selection of Jezero Crater as the landing site for the Mars 2020 rover, the first sub-objective now relates directly to a sedimentary environment. We highlight here two headline issues that demonstrate the importance of sample return in order to address the science goals.
An Absolute Age for the Martian Surface
One of the most significant pieces of information missing from our understanding of Mars is its age. We know that it aggregated from the same presolar cloud as the rest of the Solar System, so its fundamental age is about 4567 Ma. We also know that there has been significant differentiation of the planet and that it has a crust, mantle and core. The core is almost certainly solid, as is the mantle. Surface morphology and specific features (impact craters, volcanoes and river and valley networks) give relative chronologies (Fig. 4); combined with almost global coverage of high-resolution imagery, a detailed timeline for the physical evolution of Mars can be constructed (Tanaka et al. 2014). Although the extent of spectroscopic data (for chemical and mineralogical compositions) is less complete than that of imagery, it has also been possible to construct a timeline for the chemical evolution of the planet that can be matched with the physical timeline. There are, however, no absolute ages of either bedrock or regolith that can definitively tie down the chronology (e.g., Bibring et al. 2006;Murchie et al. 2009;Ehlmann et al. 2016). We can place some specific anchor points on the timeline from the absolute ages of martian meteorites. The different compositional types have distinct ages which can be linked with the cratering and compositional chronologiesbut cannot definitively assign dates to specific events or epochs because we lack knowledge of the specific sites on Mars from which the martian meteorites were ejected. So we have a situation with two end-members: martian meteorites give us an absolute age, but no geographical context. Remote sensing data acquired by orbiting spacecraft have yielded relative ages for physical and chemical events but have no way of anchoring the events to absolute ages.
Instruments on-board the Curiosity rover measured the K-Ar of the Sheepbed Mudstones within Gale Crater as 4.21 ± 0.35 Ga (Farley et al. 2014). A second determination, of the Windjana sandstone failed (Vasconcelos et al. 2016). A third age measurement, of the Mojave 2 mudstone was undertaken in two temperature increments, to distinguish secondary alteration products from primary material (Martin et al. 2017). The results showed that detrital plagioclase (presumably from older igneous rocks) had an age of 4.07 ± 0.63 Ga whilst jarosite produced by subsequent weathering had an age of 2.12 ± 0.36 Ga. These ages are a start in construction of a timeline for the sedimentary history of Gale Crater and the chronology of martian secondary processes but are not able to address issues of the nature and inferred history (primary differentiation and crystallisation) of the mantle source of the magmas from which the sedimentary rocks were derived.
The samples collected at Jezero Crater will mainly be sedimentary in nature and so, by definition, are not primary igneous materials that record a formation age. It is hoped that at least some of the crater floor will be exposed for collection of (potentially) late Noachian volcanic flows. Nonetheless, some of the most ancient ages for terrestrial rocks have come from analyses of detrital minerals in sedimentary-derived materials (Cavosie et al. 2004). U-Pb ages of individual grains of zircon and monazite from terrestrial impact craters have retained their original formation age despite having experienced subsequent hydrothermal processing (Moser 1997). The sediments from Jezero crater may provide analogous materials, as the sediments that fill the crater have been drawn from a catchment area that encompasses primary igneous rocks. Sample return is definitely essential for age-dating studies of this nature, because of the necessity to separate individual grains that have not been altered by subsequent processing.
Organics and Life?
The driving force for almost all martian exploration has been the prospect of finding life on the planet -if not extant life, then signs that life had been present at times in Mars' past. The first attempt to find life on Mars came with the Viking mission of 1977 (Klein 1978). The two landers each carried an experiment package to test the martian soil for evidence of metabolic activity. The results were almost uniformly negative -and the sole positive result was entirely explicable in terms of a chemical, not biological, reaction (Navarro-González et al. 2010). Since the Viking experiments, we have learnt an enormous amount about the martian surface, especially about the range of highly-oxidising salts (especially perchlorates) present in the soil (Lasne et al. 2016). The salts, together with the ever-present solar UV radiation, combine to ensure that the martian surface is probably free of the types of organic molecules that might be diagnostic of living cells. However, there may well be a more benign environment for survival of organic material below the surface. In July 2022, ESA's ExoMars 2022 mission will be launched; one of its main goals is to search for evidence of organic compounds. To do this, it will deploy a 2m-length drill to penetrate below Mars' surface in order to search for organic (and biological) material below any likely oxidised horizons. This will be the first time that rocks that have not been exposed to surface radiation will be analysed -and it is hoped that the presence of biologically significant molecules will be detected.
One tenet that underlies the search for life on Mars is that it is likely to be carbon-based. There are good physical and chemical explanations for why this is an acceptable assumption, especially given that Mars and Earth were formed from the same starting materials at the same time and for the earliest part of their histories experienced the same processes (impact bombardment, differentiation, etc.). The presence of reduced carbon is taken to be a significant marker for the potential discovery of evidence of life or prebiotic chemistry in a sample (Sephton and Botta 2005). The sediments of Gale Crater have been shown to be relatively rich in organic carbon (Freissinet et al. 2015;Sutter et al. 2017), but as yet have not yielded any evidence for definitive biological signatures.
It is hoped that the samples returned by the current MSR campaign will be collected from below horizons affected by solar irradiation. The second objective of the iMOST report (Beaty et al. 2019) concerned the range of investigations that should be undertaken to investigate the likelihood of life being present in the returned material. The objective was sub-divided into three: firstly, to investigate carbon -its occurrence, sources and characteristics -to determine its biotic or abiotic nature. The second part is designed to look for signs of ancient (fossilised) life by comparing the structure of any carbon-bearing entities found in the martian samples with ancient terrestrial biosignatures. Finally, there would be a search for signs of current life, again using the presence of biosignatures as a diagnostic tool.
Although organics have been measured in situ by the Curiosity rover, the series of chemical derivatisations intended to characterise specific organic compounds in the sediments have not yet been successfully completed. The reaction steps required to determine the organic inventory of a sample, and the associated range of chemicals and temperature regimes required, make for an extremely precise and complex undertaking. Such a sequence of steps is only possible on returned samples -and as outlined in Sect. 5, almost all the samples that will be collected from Jezero Crater have the potential to address the question of life on Mars.
Planetary Protection (PP)
The United Nations, through COSPAR (Committee on Space Research), has responsibility for Planetary Protection (PP), both the prevention of contamination of the Earth from potentially harmful agents (backward PP) and prevention of contamination of bodies beyond Earth by terrestrial agents (forward PP). For a sample return mission, both aspects of PP must be considered: forward PP on the collected samples and backward PP during transport, curation and analysis.
There are five planetary protection categories defined by COSPAR; all Earth return missions are Category V and a Mars Sample Return mission is Category Vb (Restricted Earth Return), the requirements for which are given as follows: The absolute prohibition of destructive impact upon return, the need for containment throughout the return phase of all returned hardware which directly contacted the target body or unsterilized material from the body, and the need for containment of any unsterilized sample collected and returned to Earth. Post-mission, there is a need to conduct timely analyses of any unsterilized sample collected and returned to Earth, under strict containment, and using the most sensitive techniques. If any sign of the existence of a non-terrestrial replicating entity is found, the returned sample must remain contained unless treated by an effective sterilizing procedure (COSPAR 2011;Kminek et al. 2017).
Design studies for an MSR Receiving and Curation Facility have examined facilities in which biohazardous material is stored on Earth. The most recent reports are Euro-CARES (2017) and MSPG (2019a, 2019b. The most strictly-controlled of such facilities is BSL-4, where all possible precautions are taken to prevent material escaping (backward PP). Less attention is given to material coming into the laboratory (forward PP). Currently, there is no existing facility that maintains high level containment in both the backwards and forwards direction, hence the need for a specially-designed facility for returned martian samples.
It has sometimes been the case in previous design studies that the requirements for PP have been seen to be in conflict with those of the science goals. It is now recognised that many of the analyses designed to respond to PP questions about the potentially hazardous nature of martian materials are the same as those that would be applied to answering science questions about the presence (or absence) of biological matter and its characteristics. The benefit of the change in approach to both PP and Science goals is that there is likely to be a more efficient and stream-lined analytical sequence for the preliminary investigations, possibly with a more rapid outcome to decision-making about release of samples from containment (MSPG 2019a, 2019b).
Sample Receiving and Curation Facility(ies)
A hugely important part of an MSR campaign is the facility (or facilities) that will support the processing, basic characterization, and eventual allocation of the material on its return to Earth. At present, although final decisions about the return part of the mission have not yet been taken, it is almost certain that the designated landing site of the cannister containing the samples will be in the U.S. This will require a sample retrieval facility to be based in the landing area, as was the case for the Genesis and Stardust missions (McCubbin et al. 2019). Processing and opening of the cannister and all subsequent activities are governed by planetary protection concerns; the chain of what happens and when and where it happens is under current consideration.
Once the samples are returned from Mars, there will be a systematic process to evaluate the type of material that has been recovered. In order to ensure fair access to samples by the community and also to carry out and evaluate the required series of tests for planetary protection purposes, it is likely that there will be an international board established to oversee a preliminary examination of returned Martian samples as well as its subsequent distribution. The structure and governance of the body that will oversee the MSR sample curation, analysis and distribution is under consideration (MSPG 2019c).
The samples will be moved from the landing site to a Sample Receiving Facility (SRF). This will be combined with (indeed, act as) a Sample Curation Facility (SCF). There will only be a single SRF -and if, as is assumed, the sample cannister lands in the U.S., the SRF will be in the U.S. However, there almost certainly will be more than one SCF: one will be part of the SRF and at least one will be in Europe (MSPG 2019a(MSPG , 2019b. For simplicity, in the rest of this section, we refer simply to a Sample Curation Facility (SCF).
Preliminary Analyses
Staff at the SCF would be responsible for sample curation and eventual allocation of samples to the international community; allocation would (presumably) be decided by an interna-tional allocation committee operating under the auspices of an oversight board. There have been many studies outlining how such a facility might operate, what instruments should be installed and the analyses that should be performed. Planetary protection protocols for different areas and processes within the SCF are under active discussion, under the auspices of COSPAR. There are many issues that have to be considered prior to analysis of a sample. The most recent study that has deliberated on potential operations models for an SRF is the MSPG joint ESA-NASA Working Group (MSPG 2019a(MSPG , 2019b. In its reports, the MSPG recognised two phases of examination that the samples would undergo in the first instance: Basic Characterisation (BC) and Preliminary Examination (PE). Both phases would take place under full BSL-4 conditions.
An outline of a possible sequence of events is as follows. BC is non-invasive and nondestructive -it comprises, as a minimum, photography and weighing of the sample tubes. All sample tubes would go through BC on arrival at the SCF. The next phase of investigation, PE, is more detailed, minimally invasive and non-destructive. It is likely to be carried out on a single tube at a time.
There are still huge unresolved issues of how BC and PE would take place: one particular concern is the dust that will cover the tubes -it is likely to permeate the cannister and will have to be collected at different stages of the cannister and sample tube opening procedures. Other issues include: (i) how the headspace gases above the tubes should be extracted and what, if anything, the tubes should be re-pressurised with; (ii) the desirability (or otherwise) of CT scanning of the tubes prior to removal of the sample; (iii) if, when and how material should be sterilised prior to distribution and (iv) how much sample will be assigned for PP testing and how will it be selected, etc. It is clear that whatever instruments and techniques are selected to perform the BC and PE analyses, they must generate sufficient relevant information to characterise the samples, enabling decision-making during the subsequent allocation process. Most of these considerations are detailed in MSPG 2019a, 2019b, and so will not be covered here.
Instrumentation
The international community has made several different attempts to determine the type of instrumentation and analysis that will be required for complete characterisation of the return samples. The required analyses can be divided up into the following categories: (i) morphology, structure and texture; (ii) mineralogy and mineral chemistry; (iii) organic components and (iv) isotopic composition. As yet undetermined is the split between which instruments and analyses would be carried out within the SCF as part of BC and PE, which analyses have to be undertaken quickly because of time-dependent considerations (e.g., disequilibrium reactions involving water activity once a tube is opened) and which analyses might be carried out on sterilised samples. MSPG (2019a) summarizes recent community consideration of this topic.
The following table is a compilation of the instruments regarded by the Euro-CARES consortium as being appropriate for installation within the SCF. The Table is divided on the basis of the examination stage that a sample is undergoing. It is probably redundant to repeat that no decisions have yet been reached as to which equipment or instruments would be required. It is also almost certain that samples would be processed within the SCF for allocation outside the SCF to individual investigators or groups with specialist instruments that will acquire data beyond that acquired within the SRF by the instruments listed in Table 2. Again, the issue of analytical instrumentation is discussed further in the iMOST and MPSG reports (Beaty et al. 2019;MSPG 2019aMSPG , 2019bMSPG , 2019c.
Summary
Decades of observation of Mars by fly-by, orbiting, landed and roving spacecraft, complemented by data from ground-and space-based telescopes and martian meteorites, have resulted in an enormous dataset of information. This has enabled a detailed picture of the planet to be built up. We have been able to infer relative chronologies for Mars' formation and its atmospheric, fluvial and volcanic histories. We have evaluated the likelihood (or otherwise) of the evolution and survivability of life and the suitability of Mars as a target for human exploration. But despite these advances, we do not know the age of Mars' crust or mantle, or when water flowed across the surface, or when spectacular volcanic eruptions took place. We can only obtain such information by returning samples from Mars to laboratories on Earth. Only then will detailed studies, carried out on carefully-prepared, highly-specific components of the martian samples complemented by high resolution, high magnification imagery, deliver the absolute chronology of Mars. The return of samples from Mars is an essential next step in understanding the origin and evolution of our neighbouring planet. It is also a critical requirement for the safe return of future martian astronauts.
X-Ray Diffraction (XRD) Mineralogical
Note: Many of the instruments are duplicated because they are used for several purposes. The list does not imply (unless specifically recognised) that physical duplicates of the instruments are required | 10,551.8 | 2020-05-06T00:00:00.000 | [
"Geology",
"Physics"
] |
A New Model to Collect Customer’s Information from In-App-Purchases in Mobile Games
— Engaging customer is one of many important keys to keep an organization running. When an organization can analyze and satisfy the customer needs, they will know what to do in the future. Collecting information about customers from customer’s phone activity is a great strategy to manage the customer engagement, especially in mobile game industry. 15 percent of smartphone users have made an in-app purchase as of March 2014. This paper will explain how to get information about customers using a simple script. The script will embed in the games directly and will send information to the game server; the data will be collected and will be analyzed later. This data will be very useful to know which country has a high transaction or have 50% or more of Daily Active Users is a Paying Active Users. The company can set a promotion or other business strategy based on obtained customers information.
I. INTRODUCTION
Understanding customer experience can enhance the relationship between customer and organization, building customer loyalty, and increase the economic value of organization [1]. Purchasing a product or service is one form of customer experience that represents the mutualism relationship between customer and organization. By analyzing the customer's information (e.g., location, behavior) from any transaction done by customers, an organization can decide to propose a business strategy to satisfy their customer. Customer satisfaction should become a fundamental target of business practice [2].
The mobile application had been developed by many people. For example, mobile apps for psychology [3], leadership [4], health [5] and the most of them are for mobile games. The game itself can be used as an alternative to the educational process. Research [6] proves that an educational game able to accelerate students' performance.
In recent years, many researchers already use a mobile game as their main research topic. For example, research [7] tells about the detailed process of how to creating a mobile game. The idea of the game itself is coming from the traditional game.
A mobile game is a potential market generating revenues [8]. In this day, gaming is a new business opportunity which gives a good amount of money [9]. Per research conducted by B2C in 2014, the revenue gained from mobile app industry (including in-app purchases) is growing from $53 billion in 2012 to $143 billion in 2016 [10]. Statistic report that 15 percent of smartphone users have made an in-app purchase as of March 2014, and the revenue generated from in-app purchase approximately 7.8 billion US dollars. By accessing the in-app store, a customer can purchase any virtual goods or services to satisfy their social needs such as prestige, status, uniqueness, conformity, and self-expression [11]. In-app-purchase is also a huge opportunity for an organization to get in touch with their customer [12].
This paper tells about a new method for collecting the information about a customer from in-app-purchase in mobile games by embedding a simple algorithm into the application. That algorithm is developed to gather the information about customer whenever they make an in-apppurchase. The information gathered is user's device location and how many times a user already made a successful purchase. By analyzing that information, there is a chance that information will benefit both organization and customer. The organization can create a new business strategy (e.g., customer segmentation [13], promotion, etc.) to develop customer loyalty, experience, and satisfaction.
The rest of this paper is structured as follows. Section 2 will discuss some theory and previous works which related to this paper. The theory that will be discussed is what is mobile games, in-app-purchase, and how location-based application can be a new way of enhancing customers. Section 3 is a discussion of the new model which will be used in this paper. The model itself has two methods, and both will be described very details. Moreover, Section 3 will tell of how to analyze the data which already being collected before. The result of the research will be described it in this section. Finally, Section 4 will tell about the conclusion and about future works.
II. MATERIAL AND METHOD
Before talking about anything about our new model, it's better to know the theory or any material about this research topic. There are many types of research that already being concluded before, and some of them have high relevance with this paper.
Research [14] conduct studies to collect information through devices. To collect the data, they study some participants to simulate a real-world deployment. The participants must install a tool app, simulate, report their daily mood, and use a sleep tracking app. After that, they conducted a short semi-structured interview and asked the participants about their general experiences of using the app. They found that information collection which requires users to manually enter their data is not going to be accepted by many users unless there is a strong motivation or reward for it. This research also recommends providing a clean and open data usage policy to users that enable them to choose whether they want to use the system and remove manual user interactions where possible.
Research [15] conducts an analytical approach to measure the effectiveness of location-targeted mobile advertising. They obtained data from an Asian mobile service company that has a partnership with cinemas and sell movies ticket via mobile phones. The company promoted their movies through the location-targeted advertising, consumer behavior, and company's website. This research indicated that the mobile advertising with behavior targeting had the highest impact on company sales. This research uses the records provided by the company to divide the company's customer into two segments, "low interest" and "high interest". The result is, this research concludes that the location-targeted advertising is very effective to "high interest" group.
There is a framework that leads to direct marketing called "recency-frequency-monetary" value (RFM). A. Asllani and D. Halstead (2015) proposed a model to help organizations to design their direct marketing campaigns and introduce personalized promotions for customers. The methodology proposed was based on a goal programming (GP) by using RFM data to maximize profit gains for direct marketers. Using RFM data involves choosing customer based on customer's last purchase (recency), purchase frequency (frequency), and the value they spent (monetary value). Goal programming tries to make some models formulation for recency, frequency, and monetary value in RFM and solve it. This research has limitations, e.g., the ability of RFM framework to accurately predict future behavior and profit is still questioned because the prediction is based on historical data [16].
One of the purposes of the new method is customer engagement. An investigation has been conducted to engage customers through gamification [17]. This paper explains that gamification is not the same as other online strategies, it introduces a component of the competition. The customer engagement behaviors are described into how customers attempt to complete their tasks in-game, achievement and reward systems, an interaction between customer and customer or customer and firm. This paper also describes the outcomes given from customer engagement, such as creating a community and gaining customer loyalty. This paper finds the key components of gamification mechanism, including challenge, task and completion, achievement and rewards and win condition. Authors find that in-app purchase is one of the solutions to achieve those key components of gamification to build the customer engagement.
For the theory part, there is three-part that needed to be discussed to understand the basis of this paper. They are mobile games, in-app purchases, and location-based applications.
A. Mobile Games
The presence of smartphones which have many computer services [18], makes everyone wants to have it. This fact will give an opportunity to anyone who wants create an application or game and sell it to Application Market. This trends still hard to stop and always growing each day.
In 2015, gaming industry got around $111 billion for selling any kind of games, and one of them is a mobile game [19].With that many incomes, naturally many companies try to adapt this trends to their business.
There are three kinds of how to get income from application or games that installed in users [20]. First is users need to pay when they try to install it on their device which called "Paid apps". Second is "Ads system", users will get an advertisement when they installed the application for free [21]. The third is "In-App-Purchases", users can buy some items inside the games and that items can be used only in the games.
B. In-App Purchases
In-App Purchases is one of three-way for monetizing games. By looking at two big games market (App Store from Apple Inc. and Google Play from Google), we can find many kinds of games that can be downloaded for free. This kind of games is not 100% free because in some games there are In-App Purchases within it.
There are four types of in-app purchase items in games: Non-Consumables, Consumable, Subscription, and Auto-Renewing Subscription.
Non-Consumables is an in-app purchase type that user can buy once and that items will be permanently in user's inventory.
Consumable is an in-app purchase type that can be used by the user, and it will not remain permanently in user's inventory.
Subscription is an in-app purchase type that needs users to pay to gain access (for a limited time) to the game if they still want to play the game.
Auto-Renewing type much like Subscription type but instead of a device based, it's more like user's base. Users who pay this kind of in-app purchase can transfer it between two or more devices.
C. Location-Based Application
In recent years, there is three operating system (OS) which hold almost all device market in the world. Device OS had a similar function with computer OS. They need to be there because the OS is the one have full control of every sensors and hardware on the device. Also, the OS will act as one the bridge for the application and the hardware or sensors of the device. These three OS are Android from Google, iOS from Apple, and Windows Mobile from Windows.
Talking about a location-based application, in the deviceespecially smartphones -there is one sensor which related to the device location. That sensor is Global Positioning System (GPS) sensor. This sensor will allow OS to know where is the device location from the help of satellite which is orbiting our earth. The device locations described to OS by their latitude or longitude position. As usual, the better GPS sensors means, the better latitude or longitude accuracy of where the device is.
When an application needs to use a GPS sensor, normally it will ask for user permission. There is a chance that the application will not be able to use the GPS sensor. There is one another way to get device location. The difference from using a GPS sensor, getting device country or region didn't need any special permission. Getting device country or region requires no special device sensor to be used. It's just need take the country or region from what user already set before. Usually, users will set it when they buy the device and run it for the first time. The OS will automatically ask them to set the region or country of the device. In case user skips this process, the country or region of the device still can be obtained by detecting from device's subscriber identification module card or usually called as SIM Card.
In technical perspective, Android, iOS, or Windows Mobile provides an Application Programming Interface (API) that allows a developer to get a device region or device location. There is another research which tells us that this API not only makes the developer easier to use it but it also helps to solve a problem [22], [23].
From business point of view, this API gave an opportunity to the company to analyze which country had a high rate, medium rate, or even low rate of purchasing items trends. This opportunity can be used in games too, especially when the games have in-app purchase in it. The game company can get a very meaningful data from any users, not only when they a successful purchase, but when they start the game, their location, region, or country will able to be collected. The new method benefits mobile game organization to enhance the customer engagement and tightens the relationship with customers. It also gives the organizations opportunities to increase their customer satisfaction, which led to customer loyalty.
III. RESULT AND DISCUSSION
Before explaining the new method, it is important to know the basic concept of how mobile games able to interact with game company servers. The detail of this concept will be presented in Fig. 1. There are three main components of these concepts. These three components are very important for each other although they have a different role. These three components must be available to make this concept working as expected. These three components are user's device, an internet connection, and a server. User's device is where the games are installed and played by the user. User device will have direct contact with the user and thus makes user device will have a huge information related to that user. For a game company, that kind of information is valuable resources. As explained before, it can be one factor to be considered for maintaining user loyalty to play their games.
The second component is an internet. Internet act as a bridge between both user devices and game servers. Every data which transmitted or received to user devices will always go through this connection. Without an internet connection, it's almost impossible for user devices to give or get any information from game servers. As bad as for user device, without an internet connection, a game server can't transmit or receive any information from user's device. It makes game server existence would be useless.
The third component that needs to be used is the game server itself. Usually, a game company will have a dedicated game server to save any information related to their games. Therefore, on the game server will have a database server which installed in it. Using database is useful because it will help the game company managing their data. They can do searching, saving, editing, or even deleting data easily. It makes game company able get a valuable information by doing it.
Understanding this concept would be important because in the next chapter of this paper will always use this concept as a principal to create the new method.
Hereafter, this paper will describe what kind of data that transmitted or received in user devices. The new method will describe a new model of how to get a customer behavior when purchasing items or services from in-app purchase feature, especially from mobile games. It will use only one simple script that runs every time the game is open or any transaction being run. Moreover, this paper will tell of how saving the data process is and how to get the information from the data itself.
The data which is transmitted or received from user devices is information about device location and how many times user do successful purchase from games. Basically, our new method is how to get, how to save, and how to process the data so that kind of data can be useful information for the game company. Fig. 2 shows the workflow of the new method of how to get the device location from games. There are several important steps to know how to get an information on customer's reaction about In-App Purchase items. That steps are, getting device location, the method to send and receive user's location on the server, and detecting user's location every time they made a successful purchase. All these steps will be described in detail later.
A. Get Device Location
Getting device's location is a simple task to do in programmatic point of view. Basically, there are two kinds of the location which able to get from user's device. One gets a device location based on (GPS) location, and the other one gets a device country or region from device settings. Android, iOS, or Windows Mobile platform already prepare an API to do both. Table I is the list of prepared API from every platform. All of them is a class which has function or member that related to the device location. Every games developer can be easily used it and get how to use it by visiting the documentation of the API itself.
Apple as a creator iOS named the API to get the device GPS location with CLLocationManager class and for getting device region or country with NSLocaleCountryCode class. Same as what Apple did, Google as a creator of Android also prepared an API to do that. They called it LocationManager class to get the device GPS location and use TelephonyManager class to get the device country or region. Windows which created Windows Mobile also create an API to do that. Geolocator class will be used to get the device GPS location and System.Globalization.RegionInfo to get the device region or device country.
The first method to get a device location is by using GPS sensor. When a game needs to use device's GPS sensor, it will ask a special permission from the user to enable it. After the GPS sensors had been enabled, then the API can work properly. Fig. 3 Pseudocode of getting device location Fig. 3 is a pseudocode of how to get device location from GPS in the device. InitPreparedAPI() is a function of the API which allows the device operating system to prepare and make sure the GPS works well. Usually, this method will return a Boolean that tells the GPS ready to be used or not. The next step is getting the device GPS location. The API will give a data in longitude degree and latitude degree of the device. The data will be saved in some variable, and later this variable will be a return value of the function. The last step is calling a ClosePreparedAPI() function. This function used for closing the connection between the games and the GPS. This is needed to be done because the GPS will not in a working state all the time. By doing this, the games will not drain the device battery.
Using GPS to get device location will guarantee to get an accurate location, but there are several limitations in getting an exact device location based on GPS. As mentioned before, using device's GPS needs special permission from users. Sometimes, the user will not allow some games to use the GPS, that will make our method to get device location will be hard to implement. The other problem is sometimes users don't want to turn on their GPS. The alternative way to get device location is by using the device region instead of using GPS.
The alternative method is much easier to implement compared to getting device GPS location. It doesn't need any special permission or any sensors to be turned on. It just obtains the information about devices country or device region set by users. Fig. 4 Pseudocode of getting device country or device region Fig. 4 is a pseudocode of how to get device country or device region. It is less complicated than getting device GPS location because this method doesn't need another sensor to get the data needed. To implement it, simply call getCountryRegion() method from the API and save it the ... } variable. Later, by calling the variable, the game will know the country or the region of the device.
Compared to the method before, this method will guarantee the data will always available and ready to take at any time. When a user buys the device, the device will always set the country or region either user set it manually, or the device will read it from the subscriber identification module card or usually called a SIM Card of the device.
Both of those methods have an advantage and a disadvantage as well. It is a good way to implement both in the mobile games. If the user disabled the GPS or don't want to give the permission to use GPS, we still get a device region or device country anyway. So, our new method is still possible to be executed.
B. Send the Information to Server
After the device location is obtained, then send the location data that we get earlier to the server. The simplest way to do it is by using JavaScript Object Notation (JSON). JSON is a very lightweight data interchange format which easier to understand by both human and computers. JSON just a text which has curly brackets "{}" as an object of things and JSON also use a square bracket "[]" as an array of objects.
The fact that JSON is a lightweight data-interchange format makes it a perfect data format to be used. It will consume a light bandwidth, and it did not cost much in user's point of view. Our new method will use JSON as data format when sending a location data and receive a response from the server. 5 shows the example of data in JSON and how data is sent to the server. The data will be an object of locations which have longitude, latitude, and a device country as a member. Longitude and latitude position will be served from GPS location, and the other one is from device's country. This data will be recorded and being analyzed in the last step of our new method.
C. In-App Purchases
One more thing to be analyzed is when the user makes a successful transaction from in-app purchase. It is very important to know the intention of in-app purchases in each country. That is valuable information for a mobile games company to prepare a business strategy in the future. Fig. 6 showing the process of how the user is buying an item from an in-app purchase of a game. There are five steps from the start until the information of the user is saved on the server. This process started with user select which item they want to buy. The game will process it and shows the user how much they need to pay. The store will ask the user to choose their payment method, and then this purchase confirmation will be sent to game servers. 6 Step by Step How User Buying an In-App Purchases Next step is validating the purchase request on the game servers. In the game servers, the purchase confirmation from users will be validated. They will be compared with the data from Store. There are two possibilities when this process is concluded, if they are not matched, then the purchase will be voided, and the game server will not send the item to the users. When they are matched, then this process will go to the next step.
The next step is the most important in our new method because this step is usually being forgotten by some game company to do. Every successful or even unsuccessful purchase by the user must be recorded. It is very important because, from that information alone, the game company can analyze what to do in the future. This step called a confirmation step.
The confirmation step is the last step of how buying an item from in-app purchase. In this step, game server and user device will do their last confirmation step before the user gets their item. If game server detected that the purchasing request from the user is valid, then the game server will send a JSON that tells the purchase process is successful. The example of the JSON can be seen in Fig. 7. While sending the JSON to user's device, the game server also sends the purchased item. User device will response it by updating user's game with the received item. Also, the game will update their user device information. Basically, the code is pretty like get a location or a country/region device. In this part, the game will add a new member in JSON that already sent before. The JSON object will be the same as Fig. 5, but this part needs to add "success_purchase" as a new member. The new JSON user device information can be seen in Fig. 8. "longitude":"113.9213E", "latitude":"0.7893S", "country":"ID", "success_purchase":1 } A "success_purchase" is the purchases frequency made by users in in-app purchase. It constantly increases every time user makes a purchase.
D. Analyzing Collected Data
The last step of our method is to analyze every country which already registered in the server. Just simply count every country which comes up to the server and writes it to the table. Fig. 9 shows the data stored on the server. The server will receive all the JSON details of every device location. After getting it, the server will compile the data by merging all of it into one JSON. They will be separated by a comma (,) and makes the JSON looks like an array. By doing it, it will be easier to a game company to work with it.
This example shows there are ten devices that recorded on the server. The devices are having ten different countries and different amount of "success_purchase" variable too. In some cases, "success_purchase" never appear in the data, like in the data number five. The users who known located in Vietnam didn't have variable "success_purchase" on the server. That means this user is never making any successful purchase before while he still playing the game regularly.
By looking through of all country that appears on the JSON, the game company able to know how many players are installing and playing the game. More of that, the game company will also know how many users are a Paying Active Users (PAU) or just a Daily Active Users (DAU). DAU is users who install the games, using it regularly, but never buy any in-app items in the game. PAU is users who install the games, play it regularly, and also have made an in-app purchase at least once. Table II is an example of how the collected data are presented. The users will be counted one by one until finding the exact number of how many users are installing and playing the game in every country. After that, the users in every country will be divided into two groups, which is DAU and PAU.
Knowing how many DAU and PAU in each country can be a key factor to consider which country will get a promo, special event, or anything that can raise user interest that will lead to user engagement.
PAU Percentage=
Getting PAU percentage from all DAU can be obtained using equation (1). It can be obtained by dividing the total amount of PAU with the total number of DAU. Then multiply it with 100, and the result is a percentage of PAU in the specified country. This percentage of PAU can be a crucial factor as this value can be determined of how the game company should do in the future. Detailed of how the company should do with this value will be explained later.
This chart will help the game company to decide what to do about their game to maintain or increase the PAU from all the DAU. Ideally, more PAU in all DAU is better because it means the game gives more income for the company.
IV. CONCLUSIONS
With our new method to get region or location from any smartphones device will help any games company to determine which country had a high rate, medium rate, or low rate of purchasing in-app purchase. It will give the games company opportunity to maintain the loyalty of their customer. They can do it by making a promo in the selected country, making some event in games by location, and anything to get customer loyalty. The new method also can be used as a tool to enhance customer engagement for organizations.
For future works, instead of user's location only, user's time purchasing, a user's purchasing behavior, or highest selling items will be considerate as a new variable to decide for gaining the loyalty of customers.
Moreover, because our new method relies on internet connection, it will be a good idea to maintain the security of it. This is also can be an interesting topic to be discussed in the future. | 6,643 | 2017-12-28T00:00:00.000 | [
"Computer Science",
"Business"
] |
Cell Growth Inhibition Effect of DsiRNA Vectorised by Pectin-Coated Chitosan-Graphene Oxide Nanocomposites as Potential Therapy for Colon Cancer
Colonic-targeted drug delivery system is widely explored to combat colon-related diseases such as colon cancer. Dicer-substrate small interfering RNA (DsiRNA) has been explored for cancer therapy due to its potency in targeting specific gene of interest. However, its application is limited by rapid degradation and poor cellular uptake. To address this, chitosan-graphene oxide (CSGO) nanocomposite was used to deliver DsiRNA effectively into cells. Additionally, pectin was used as compatibilization agent to allow specific delivery to the colon and protect the nanocomposites from the harsh environment in the stomach and small intestine. CS-GO-DsiRNA nanocomposites were prepared by electrostatic interaction between CS and GO prior to coating with pectin. The mean particle size of CS-GO-DsiRNA-pectin nanocomposites was 554.5 ± 124.6 nm with PDI and zeta potential of 0.47 ± 0.19 and −10.7 ± 3.0mV, respectively. TEM analysis revealed smooth and spherical shape of CS-GO-DsiRNA nanocomposites and the shape became irregular after pectin coating. FTIR analysis further confirmed the successful formation of CS-GO-DsiRNApectin nanocomposites. Furthermore, the nanocomposites were able to entrap high amount of DsiRNA (% entrapment efficiency of 92.6 ± 3.9%) with strong binding efficiency. CS-GO-DsiRNA-pectin nanocomposites also selectively inhibited cell growth of colon cancer cell line (Caco-2 cells) and were able to decrease VEGF level significantly. In a nutshell, pectin-coated DsiRNA-loaded CS-GO nanocomposites were successfully developed and they have a great potential to deliver DsiRNA to the colon effectively.
Introduction
Colorectal cancer is ranked as the third most common cancer worldwide and it contributes to major cause of death in Western country [1].Conventional treatments of colon cancer such as chemotherapy and radiotherapy have considerable drawbacks as they could not specifically target cancer cells and might cause injury to healthy cells [2].Besides, patients on conventional treatment often develop tolerance to targeted therapy by gene mutation.Gene therapy has gained enormous interests recently as it can be used to treat wide range of diseases.An effective treatment approach by producing siRNA-based drug may target specific mRNAs despite of their cellular locations or structures of translated protein [3].Kulisch et al. [4] reported that Dicer-substrate siRNA (DsiRNA) displays excellent potency in gene silencing and is able to silence gene for longer time compared to siRNA.DsiRNA has several advantages compared to standard 21-mer siRNAs.The advantages include better selectivity of the guide strand as a consequence of Dicer processing and handoff to RNA-induced silencing complex (RISC) as well as higher potency attributed to lower effective concentration needed [5].
RNAi technology was reported to provide an alternative strategy in treating cancer by inhibiting overexpressed oncogenes, blocking cell division by interfering with related genes, or promoting apoptosis by suppressing antiapoptotic genes [6].Vascular endothelial growth factor (VEGF) is an important angiogenic factor associated with tumor growth and metastasis [7].According to Ahluwalia et al. [8], normal colonic epithelial cells do not produce VEGF or express its receptors.However, in colon cancer cells, VEGF and its receptors are expressed and it promotes colon cancer cell proliferation directly.Therefore, DsiRNA targeted against VEGF gene was used to kill cancer cells by silencing overexpression of its protein [6].
In a strategy to activate RNAi pathway, siRNA molecules need to be delivered to the interior of target cells and to be incorporated into the RNAi machinery.However, siRNAs cannot cross the lipid bilayers of the cell membrane readily as they have anionic backbone and hydrophilic properties [9].Besides, siRNAs are facing some other obstacles to reach their site of action.The obstacles include undergoing degradation by nucleases, short half-life due to the uptake by mononuclear phagocyte system, off-target effect, and rapid renal clearance [10].As a result of these obstacles, the need to develop carriers for siRNA is paramount to deliver it to the site of interest efficiently.An ideal delivery carrier for siRNA should possess properties such as being nontoxic and nonimmunogenic, being able to condense siRNA efficiently, protecting the integrity of its content before reaching the target site, having the ability to evade rapid elimination from blood circulation, and internalizing and dissociating in intracellular compartments of the target cells to release the adequate amount of siRNA, thereby exposing siRNA to mRNA [11].
Nanomedicine-based carriers have been extensively explored as potential candidates to guide siRNAs directly to the target cells.Nanocarriers offer several advantages that include providing sustained release and prolonging the time of siRNA at the target site.Nanocarriers also are able to penetrate capillaries; hence, they can accumulate at the site of interest.The nanotechnology approach could also protect siRNA against RNase degradation and avoid offtarget effects [12].In nanomedicine, polycationic polymers such as chitosan (CS) were used to condense siRNA into nanoparticles to facilitate cellular uptake [13].
CS in combination with pectin has been widely explored for their benefit as colon-specific drug delivery system [14].Apart from being biocompatible, less toxic, nonimmunogenic, and degradable by enzymes [15], CS can also be used in gene delivery system.Tremendous studies on graphene oxide (GO) have made it the gold approach in biomedicine applications.GO is a highly oxidised graphene form with oxygen functional groups on its surface.GO has been combined with CS to form a nanocarrier with better aqueous solubility and biocompatibility.In addition, it can exhibit powerful loading capacity besides having the ability to condense plasmid DNA into stable, nanosized complexes [16].In an attempt to ensure that CS-GO-DsiRNA nanocomposites would be delivered to the target site successfully, pectin was used to coat the nanocomposites due to its unique property that will be degraded by colonic microflora.This property enables it to be used as a specific drug carrier to the colon [17].Besides, it also displays antitumor activity [18] that can further enhance growth inhibition or cytotoxic effect in colon cancer cells.
In this study, water-soluble CS was synthesised from low molecular weight (LMW) CS to enhance its solubility, while GO was synthesised using Hummer's method from graphite flakes.Oxidation of graphite using oxidising agent will increase interplanar spacing between graphite layers, producing GO [19].Later, DsiRNA was adsorbed onto CS-GO nanocomposites.Pectin was utilised as coating agent to hinder premature DsiRNA release in gastric and intestinal fluids.To confirm successful synthesis of CS-GO-DsiRNApectin nanocomposites and their ability to load DsiRNA, physical characterization, structural analysis, and in vitro drug release study were conducted.The nanocomposites were also tested for their in vitro cytotoxic effect in normal and cancerous cell lines.Finally, silencing of VEGF gene by DsiRNA was determined using ELISA.
Synthesis of
Water-Soluble CS. 1 g of LMW CS was dissolved in 2% v/v of acetic acid using a magnetic stirrer.The CS solution was adjusted to pH 4.5 by adding few drops of NaOH and left overnight.A 20% w/v hemicellulase enzyme solution was added to CS solution and the mixture was left in the water bath at 40 ∘ C for 6 h.pH of the mixture was then adjusted to 5.5 by adding few drops of NaOH solution.After that, the mixture was boiled for 10 min to denature the enzyme.The upper layer of solution containing enzyme was removed manually by using a spatula until a concentrated solution formed.Then, the solution was frozen at −80 ∘ C and it was lyophilised using a freeze dryer (ScanVac CoolSafe Freeze Drier, Lynge, Denmark) at −110 ∘ C for one day to obtain water-soluble CS.It was then crushed into powdered form by using mortar and pestle.The functional groups of water-soluble CS were identified by using Fourier-Transform Infrared Spectrophotometer (FTIR) (Perkin-Elmer Spectrum 100, Waltham, USA). 3 mg of water-soluble CS and 300 mg of dried IR-grade potassium bromide (previously heated for 2 h at 105 ∘ C) were ground into powder to produce a thin disc.The sample was scanned at frequency range of 4000 to 400 cm −1 .
Synthesis of GO Using Hummer's
Method.An amount of 110 mg graphite and an amount of 55 mg NaNO 3 were mixed in 6 mL concentrated H 2 SO 4 and the mixture was stirred overnight using a magnetic stirrer.An amount of 300 mg KMNO 4 was added and left overnight, while the conical flask was being surrounded by ice bath.Another 300 mg of KMNO 4 was added and left overnight on the next day.Ice bath was removed and it was replaced with water bath at 40 ∘ C overnight.Later, 7 mL of deionised water was added and heated in water bath at 98 ∘ C for 2 h until yellow-brownish solution was formed.7 mL of 30% H 2 O 2 was added and the mixture was left for 1 h at 98 ∘ C.After that, the mixture was left to cool down to room temperature and poured into centrifuge tube.The supernatant was removed and the pellet formed was resuspended with 10% HCl and deionised water several times.Then, it was poured into dialysis bag for 5-7 days.The solution was frozen at −80 ∘ C before it was lyophilised using a freeze drier (ScanVac CoolSafe Freeze Drier, Lynge, Denmark) at −110 ∘ C for one day.The functional groups of GO were identified using FTIR (Perkin-Elmer Spectrum 100, Waltham, USA) as mentioned above.
Preparation of DsiRNA-Loaded CS-GO Nanocomposites.
Water-soluble CS was dissolved in distilled water and stirred to produce 0.1% w/v of its solution.GO 0.25% w/v was added to CS solution separately and it was mixed continuously for 12 h using a magnetic stirrer.The resultant mixture was degassed in a sonicator bath for 30 min.DsiRNA-loaded CS-GO nanocomposites were prepared by adding 500 L of the nanocomposites solution to an equal volume of DsiRNA solution (15 g/mL) in deionised water.The mixture was quickly mixed by inverting the reaction tube up and down for a few seconds and incubating for 30 min at room temperature before further analysis.
Preparation of Pectin-Coated CS-GO Nanocomposites.
Pectin powder was dissolved in deionised water at 70 ∘ C under magnetic stirring to produce 0.1% w/v pectin solution.10 L of pectin solution was added to the DsiRNA-loaded CS-GO nanocomposites and the mixture was vortexed.The coated formulation was harvested by centrifugation (UNIVERSAL 320R Benchtop Centrifuge (Hettich Centrifuges, UK)) at 13000 rpm for 1 h at 10 ∘ C. The pellets were resuspended in deionised water.
Determination of Particle Size, PDI, and Zeta Potential.
A volume of 1 mL nanocomposites suspension of CS-GO, CS-GO-DsiRNA, and pectin-coated CS-GO-DsiRNA was placed separately in a glass cuvette using a pipette prior to analysis.No dilution was done during the analysis.The particle size (-average), polydispersity index (PDI), and zeta potential of the nanocomposites were characterised in triplicate using Zetasizer5 Nano ZS (Malvern Instruments, UK).The measurements were performed at 25 ∘ C with a detection angle of 90 ∘ .All data are expressed as the mean ± standard deviation (SD).
Determination of DsiRNA Entrapment Efficiency and
Nanocomposites Yield.DsiRNA entrapment efficiency was obtained from determination of free DsiRNA concentration in supernatant recovered from centrifugation process using a UV-1061 UV-Visible Spectrophotometer (Shimadzu, Japan).Unloaded nanocomposites formulation was used as blank.Concentration of free DsiRNA was determined using Beer's Law ( 260 = ) and calculations were done using the following equation: where is the concentration of DsiRNA, 260 is the extinction coefficient, and is the path length of the cuvette.Extinction coefficient of DsiRNA is 518500 L −1 ⋅cm −1 .The sample was measured in triplicate.The entrapment efficiency was calculated by using the following equation: The yield of nanocomposites was calculated by using the following equation: = mass of DsiRNA − nanocomposites total mass of composites and DsiRNA added × 100. (3)
Determination of DsiRNA Binding Efficiency.
The binding efficiency of DsiRNA to the nanocomposites was determined by using E-Gel5 4% agarose (GP) stained with ethidium bromide (Invitrogen, Israel).20 L samples of nanocomposites were loaded into respective wells of the gel.Naked DsiRNA was used as positive control, while 10 bp DNA ladder was used as size reference.Electrophoresis was run for was then visualised by using a real-time UV transilluminator (Invitrogen, USA).
Morphological Analysis.
Morphological analysis of pectin-enveloped CS-GO nanocomposites was characterised by using a Transmission Electron Microscopy (TEM, Tecnai Spirit, FEI, Eindhoven, Netherlands).A drop of sample was placed on the copper microgrid that was stained by uranyl acetate and evaporated at room temperature (25 ± 2 ∘ C).It was viewed under a TEM for imaging of samples at different magnifying scales.
ATR-FTIR Spectroscopic Analysis. The Attenuated Total
Reflectance FTIR spectra of pectin, CS-GO, and CS-GOpectin nanocomposites were recorded against the background by using a universal ATR sampling assembly (Perkin-Elmer Spectrum 100, Waltham, USA).For each sample, 16 scans were obtained at a resolution of 4 cm −1 in the range of 4000 to 400 cm −1 .Meanwhile, LMW CS, water-soluble CS, and GO powder were characterised by using FTIR Spectrophotometer (Perkin-Elmer Spectrum 100, Waltham, USA).
In Vitro Release Study.
In vitro drug release study was performed in the simulated gastrointestinal (GI) condition; simulated gastric, intestinal, and colonic fluid (SGF, SIF, and SCF, resp.).Formulations containing 0.25 wt% GO coated with 0.1% and 0.2% pectin were used to determine the influence of pectin concentration on DsiRNA release.SGF was prepared by dissolving 2 g of NaCl in sufficient amount of distilled water.7 mL of 0.1 N HCl was added to the solution and the volume was made up to 1000 mL using distilled water.The pH was finally adjusted to 1.2 using 0.1 N HCl or 0.1 N NaOH.Meanwhile, SIF was prepared by mixing solution containing 6.8 g of potassium dihydrogen phosphate in 250 mL distilled water with 72 mL of 0.2 M NaOH.Distilled water was added to make up to 1000 mL solution and the pH was adjusted to 6.8 by using 0.1 N HCl or 0.2 M NaOH.The SCF was prepared by mixing 250 mL of 0.2 M dipotassium hydrogen phosphate with 28.5 mL of 0.2 M NaOH and distilled water was added to make final volume of 1000 mL.The pH of the solution was adjusted to 7.2 using 0.1 N HCl or 0.2 M NaOH.For the first 2 h, the in vitro release study was conducted in SGF (pH 1.2) to follow the average gastric emptying time.The medium was then replaced with SIF at pH 6.8 (with addition of few drops of NaOH).The study was continued for another 3 h.After that, the medium was replaced with SCF at pH 7.2 (with addition of few drops of NaOH) in the presence of 0.3 mL pectinolytic enzyme and the study was extended for another 3 h.Each formulation (300 L) was added to a dialysis tubing cellulose membrane (Sigma-Aldrich) and immersed in 50 mL of simulated fluids in a beaker.The beakers were placed in a shaker water bath (J.P Selecta) (37 ± 0.2 ∘ C) with horizontal shaking of 25 rpm.At predetermined time intervals, 4 mL of the simulated fluid was withdrawn and the concentration of DsiRNA released was measured using UV-Vis Spectrophotometry at 260 nm.
To prevent sink condition, 4 mL of fresh medium was added to the beaker to replace the withdrawn fluid.The percentage of DsiRNA released was determined by using the following equation: ).When the cells achieved 70% confluency, they were then dissociated using 1 mL of 2.5 g/L-Trypsin/1 mmol/L-EDTA solution and incubated for 3-5 min at humidified 5% CO 2 /95% air atmosphere.After trypsinization, the cells were suspended in 1 mL fresh medium and centrifuged for 3 min at 1000 g.Then, the supernatant was discarded and the cell pellet was subsequently resuspended in 1 mL complete medium.Suspension density was measured using a haemocytometer.Once the cell density reached 1 × 10 5 /mL, 100 L of suspension was pipetted into each well.The cells were incubated for 24 h at 37 ∘ C with 5% CO 2 .After 24 h, the medium was changed to serum-free medium and Caco-2 colon cancer cells were exposed to pectin-enveloped CS-GO nanocomposites, DsiRNA-loaded CS-GO nanocomposites, pectin-enveloped DsiRNA-loaded CS-GO nanocomposites, and naked DsiRNA for 24 h at humidified 5% CO 2 /95% air atmosphere.At the end of the exposure time, the culture media were collected and centrifuged to remove cell debris.The cell culture supernatants were used in ELISA assay.100 L of each standard and cell culture supernatants were added to human EG-VEGF antibody-coated ELISA plate.The wells were covered and incubated in room temperature for 2.5 h with gentle shaking.The solution was discarded and washed with 300 L of 1x wash solution for each well for 4 times.After the last wash, the 96-well plate was inverted and blotted against clean paper towels.100 L of biotinylated human EG-VEGF detection antibody was added to each well and incubated for 1 h at room temperature with gentle shaking.The solution was discarded and the washing step was repeated.Next, 100 L of HRP-Streptavidin solution was added to each well and incubated for 45 min at room temperature with gentle shaking.The solution was discarded and the washing step was repeated.Then, 100 L of 3,3 ,5,5 -tetramethylbenzidine (TMB) One-Step Substrate Reagent was added to each well and incubated for 30 min at room temperature in the dark with gentle shaking.50 L stop solution was added to each well and the plate was read at 450 nm immediately by a microplate reader (Varioskan Flash, Thermo Scientific, Waltham, MA, USA).The amounts of VEGF secreted from the untreated and treated cells were determined from the standard curve of optical density against known concentration of VEGF.All measurements were performed in triplicate and data are reported as mean ± SD.
Statistical Analysis.
The data obtained were shown as mean ± standard deviation (SD).The data were further analysed using analysis of variance (one-way ANOVA, followed by Tukey's post hoc analysis) using SPSS 23.0 (SPSS Inc., Chicago, IL, USA). value < 0.05 was considered significantly statistically different among the groups tested.
Results and Discussion
3.1.Preparation of Water-Soluble CS.CS has been widely used for various purposes, especially in biomedical application, but it has limited capacity as it is insoluble in water and can only dissolve in acidic medium.In physiological pH 7.4, CS is insoluble.In order to ensure that it can be utilised across wide pH range, it is important to improve the solubility of CS.Thus, in current study, water-soluble CS has been synthesised from LMW CS by mean of enzymatic degradation.Katas and Alpar [20] reported that the use of LMW CS can generate smaller mean particle size as compared to high molecular weight (HMW) CS for the individual CS derivatives.This was also supported by Ilyina et al. [21] who reported that partially hydrolysed CS with LMW has better water solubility due to shorter chain lengths and free amino groups in Dglucosamine unit.The nature of the dissolution of CS was the degradation of intermacromolecular hydrogen bonds and interchain hydrogen bonds which modified the structure of CS, reducing its crystallinity and unfolding its molecular chains [22].The LMW CS powder is in yellowish color, whereas the water-soluble CS produced is in creamy white color powder form.
FTIR spectra of water-soluble CS and LMW CS are shown in Figure 1.The broad spectrum observed in LMW CS is 3460.48cm −1 , indicating the presence of O-H stretching bond.There is a shift of band measured in water-soluble CS at 3413.14 cm −1 , also representing O-H stretching bond, but the peak width is narrower.C-H stretching can be seen in water-soluble CS at 2926.50 cm −1 .The characteristic peaks at Table 1: Particle Size, PDI, and zeta potential of CS-GO nanocomposites before and after DsiRNA loading and pectin coating ( = 3).
Before DsiRNA loading
After DsiRNA loading After pectin coating PS (nm) ± SD PDI ± SD ZP (mV) ± SD PS (nm) ± SD PDI ± SD ZP (mV) ± SD PS (nm) ± SD PDI ± SD ZP (mV) ± SD 308.2 ± 85.9 0.65 ± 0. 1639.35cm −1 and 1400.64 cm −1 represent amide 1 (C=O) and amide III (complex vibration of NHCO group) bands of CS, respectively.Degree of deacetylation decreases as CS turns to be water-soluble and this is proven by the presence of peak at 1567.56 cm −1 due to contribution of amide II band.Stretching of ether (C-O-C) bridge is observed at 1156.19 cm −1 .In LMW CS, C-O-C stretching is assigned at 1081.23 cm −1 and there is a slight shift of the same functional group presence in water-soluble CS (1080.24cm −1 ).Difference in FTIR spectra of LMW CS and water-soluble CS suggests that there is a cleavage of some glycosidic bonds introduced by hemicellulase enzyme.This reflects successful production of watersoluble CS from LMW CS.
Preparation of GO.
GO was synthesised by using Hummer's method from graphite prior to preparation of CS-GO nanocomposites.In Hummer's method, graphite was being oxidised by an oxidising agent (KMNO 4 ), and increase in interplanar spacing between the layers of graphite would later produce GO [19].In literature, the thickness of single-layer GO was reported in the range of 0.4 and 1.7 nm [23].This variation in the thickness of the single graphene layers could be attributed to different measurement conditions, sample preparation procedures, or other laboratory conditions [24].GO was then characterised by FT-IR analysis to investigate the bonding interactions in GO. Figure 2 depicts that GO has peak at 3434.48 cm −1 and it was attributed to O-H stretching of H 2 O molecules absorbed during GO synthesis.Besides, C-H stretching of aromatic group's peak is shown at 3186.01 cm −1 .In addition, since GO has planar structure with bundle of aromatic rings, C=C stretching of aromatic group can be observed at 1400.37 cm −1 .The peak at 1078.85 cm −1 was assigned to C-O stretching of ether functional group.This was supported by Paulchamy et al. [19] who also reported that C-O bond was observed at 1081 cm −1 , verifying existence of oxide functional groups after the oxidation process.Hence, GO was successfully synthesised from graphite by using Hummer's method.
Particle Size, Surface Charge, and PDI of Nanocomposites.
In current study, concentration of CS at 0.1% w/v was used as it produced nanocomposites with smaller particle size compared to higher concentrations of CS (0.2 and 0.3% w/v) (data was not shown).It was previously reported that lower concentrations of CS contributed to smaller particle size of nanoparticles owing to their low viscosity that produces efficient gelation procedure [25].In preliminary study, formulation of 0.1% w/v CS and 0.25% w/v GO had been shown to produce nanocomposites with the most favorable physical characteristics (Supplementary 1 in Supplementary Material available online at https://doi.org/10.1155/2017/4298218).Furthermore, the formulation had shown the desired release profile of DsiRNA because it allowed low release of DsiRNA in the simulated stomach and intestine fluids but with high cumulative release in the simulated colon fluid (Supplementary 2).
Based on these results, CS-GO nanocomposites were prepared from 0.1% w/v CS and 0.25% w/v GO.Table 1 presents the mean particle size of CS-GO nanocomposites before and after DsiRNA loading and pectin coating.GO prepared from graphite powder using Hummer's method was mixed with CS and the mean particle size measured was 308.2 ± 85.9 nm.CS is positively charged due to protonated amine and it will interact electrostatistically with negatively charged carboxyl group and phenolic group of GO to form strong interaction of CS-GO nanocomposites.These functional groups form intermolecular hydrogen binding and initiate very fine codispersion in the molecular space [26].
Upon the addition of DsiRNA to CS-GO nanocomposites, there was a significant reduction in the mean particle size measured.The particle size reduced to 125.2 ± 15.6 nm ( < 0.05, ANOVA followed by Tukey's post hoc analysis).This was expected to be due to the interaction of oppositely charged CS and DsiRNA molecules.Negatively charged backbone of DsiRNA allows electrostatic interaction with cationic CS-GO and this resulted in adsorption of DsiRNA to the nanocomposites which also condensed DsiRNA through neutralization into more compact shape.Furthermore, phosphate group of DsiRNA further increase negative charge of this nanocomposite which previously was contributed mainly by GO.As a result, more negative charges are able to neutralise the protonated amine group of CS and this can be proven by significant particle size reduction of CS-GO-DsiRNA nanocomposites.Raja et al. [27] reported that smaller particle size of the DsiRNA-loaded formulations was due to less extended or denser arrangement of CS molecules.Even though CS is a well-known carrier for drug delivery, it has major drawback that is related to its fast dissolution in the stomach due to solubility in acidic condition [28].To overcome this problem, an enteric coating is required to protect drug from being released in the stomach.Pectin has been used to coat the CS-GO-DsiRNA nanocomposites.The nanocomposite had increased particle size after coating with pectin.Polyelectrolyte complexes were formed when cationic amino groups on the C2 position of the repeating glucopyranose units of CS form electrostatic interaction with the negatively charged carboxyl groups of pectin [28].The mean particle size of pectin-coated CS-GO-DsiRNA nanocomposites was 554.5 ± 124.6 nm.Larger size of nanocomposites after pectin coating was expected due to the presence of pectin layer on the surface of CS-GO nanocomposites.On the other hand, PDI value of CS-GO nanocomposites was 0.65 ± 0.14, which indicated that particle size was broadly distributed.Interestingly, PDI value of these nanocomposites reduced to 0.45 ± 0.16 after loading with DsiRNA, indicating homogeneity of the particles.After coating with pectin, PDI increased slightly to 0.47 ± 0.19 and it was considered within the acceptable range of narrow particle size distribution.
Zeta potential of CS-GO nanocomposites measured in this study was +17.0 ± 1.4 mV.CS is a very hydrophilic biopolymer and it has polycationic properties.As CS is mixed in aqueous medium, the functional group of NH 2 and OH will be protonated to polycationic material.Meanwhile, surface of GO sheets is negatively charged when dispersed in water due to ionisation of carboxylic acid and phenolic hydroxyl groups on the GO sheets [26].The net positive charge decreases to +10.5 ± 1.4 after DsiRNA was loaded.Phosphate group of DsiRNA adsorbed onto the surface of CS-GO nanocomposites was attributed to the decrease in magnitude of zeta potential.The zeta potential of CS-GO-DsiRNA nanocomposites was further dropped and shifted from positive to negative value (−10.7 ± 3.0) after they were coated with pectin.This effect was owing to the negatively charged carboxylic acid (COO − ) of pectin that forms electrostatic interaction with positively charged amino group (NH 3+ ) of CS [28].This electrostatic interaction caused pectin to coat on CS-GO nanocomposites and led to reduction in zeta potential [29].Upon addition of DsiRNA, followed by pectin, there were significant changes in zeta potential, indicating existence of interactions at each step in synthesising nanocomposites ( < 0.05, ANOVA followed by Tukey's post hoc analysis).
ATR-FTIR Spectroscopic
Analysis.FTIR peak spectrum comparison of pectin, CS-GO, and CS-GO-pectin nanocomposites provides further verification of successful configuration among them as shown in Figure 3.The presence of oxygen functional groups such as epoxy, hydroxyl, carbonyl, and carboxyl groups on the basal plane and edges can be used to characterise GO.Hydroxyl group (-O-H) stretching was observed at 3342.75 cm −1 , 3325.49cm −1 , and 3330.66 cm −1 in different formulations consisting of CS-GO, pectin, and CS-GO-pectin, respectively.The peaks at 1773.18 cm −1 of pectin and 1772.80 cm −1 of CS-GO were assigned to C=O stretching of carbonyl group.The peak then shifted to 1767.15 cm −1 for CS-GO-pectin nanocomposites, and this might be attributed to conjugation that moves absorption to a lower number.This was further confirmed when Bao et al. [16] reported that residual carbonyl moieties (C=O) on the basal plane and periphery of GO can be represented by small absorbance peak appearing at 1740 cm −1 .The peak then shifted to 1767.15 cm −1 for CS-GO-pectin nanocomposites, and this might be attributed to conjugation that moved absorption to a lower number.Amine group characterised by C-N stretching in CS-GO and also CS-GO-pectin nanocomposites can be seen clearly at 1128.92 cm −1 and 1129.77cm −1 , respectively.
Morphological Analysis.
The morphology of GO, CS-GO, CS-GO loaded with DsiRNA, and pectin-coated CS-GO-DsiRNA nanocomposites was investigated using TEM that illustrates nanoscale visualisation of an individual particle.Most importantly, TEM can provide information of both particle size and morphology of the nanoparticles.GO analysed under an electron microscope appeared to be in layers of lateral dimension sheets with sharp edges (Figure 4(a)).
It was reported that GO sheets exist with very sharp edges and flat surfaces [16] that coincided with the GO produced in current study.Besides, large surface area of GO sheets is usually utilised as support for growth and stabilisation of nanoparticles [30].
Upon mixing GO into CS solution, irregular shape of nanocomposites was observed as shown in Figure 4(b).Similar morphology was also reported by Bao et al. [16], in which CS-GO appear to be coarse and some protuberances could be seen on the surface, which mainly were generated from the polymer wrapping and folding.Figure 4(c) shows formation of small, rounded, and smooth surface of CS-GO-DsiRNA nanocomposites.Small particle size reflects that DsiRNA is well adsorbed onto CS-GO nanocomposites.However, after the nanocomposites were coated with pectin, they appeared to fuse with one another and form aggregates.This could be seen from Figure 4(d).Generally, the estimated sizes for CS-GO, CS-GO-DsiRNA, and pectin-coated CS-GO-DsiRNA nanocomposites were 2.6-, 2.8-, and 5.5-fold smaller than those measured using dynamic light scattering (DLS).The difference in measured particle size using TEM and DLS might be attributed to different principles applied.The equivalent diameter used in particle size analysis for TEM and DLS is projected area and hydrodynamic diameter of diffusion area, respectively.Despite that, DLS provides a more accurate measurement as it represents the whole sample.
3.6.DsiRNA Entrapment and Binding Efficiencies and Nanocomposites Yield.High entrapment efficiency of 92.6 ± 3.9% was measured for CS-GO-DsiRNA (0.1% w/v CS and 0.25% w/v GO) nanocomposites, besides its high nanocomposites yield (77.1±4.6%), which is considered suitable for delivering a therapeutically active dose [31].High entrapment efficiency plays significant role in ensuring that drug is being delivered to its target successfully.CS-GO sheets have powerful capacity to entrap plasmid DNA to form compact complexes [16].Besides, graphene nanosheets have wide surface area that enables them to be functionalised maximally besides having great loading capacity.They also allowed encapsulation of molecules together for protection purpose and sustained release of loaded molecules which becomes a trend in current delivery applications [3].Pectin was further added as a coating agent to ensure that DsiRNA was entrapped on the surface of CS-GO sheets and protected from the harsh environment of the stomach and small intestine.Moreover, the coating has no significant effect on the entrapment efficiency as the percent was maintained after coating with 0.1% pectin (% EE = 90.2± 5.4, refer to Supplementary 1).Similar finding was observed for the yield of nanocomposites after coating with pectin (yield = 74.9±1.4%).On the other hand, Figure 5 shows the absence of a trailing band of DsiRNA.This indicated that DsiRNA has strong binding interaction with CS-GO nanocomposites and it was well protected by pectin layer.These results suggested that pectin coating did not affect the binding efficiency of anionic DsiRNA with cationic CS-GO nanocomposites or causing premature release of DsiRNA.
The results also further supported the finding of high DsiRNA entrapment efficiency for CS-GO.DsiRNA was found to be efficiently and tightly bound to CS nanoparticles as reported by Raja et al. [27].However, the binding of DsiRNA can be reversed and it will be released when the polymeric matrix degrades [27].In this situation, dissociation of pectin by the action of pectinase enzyme will also allow the release of DsiRNA in colon.
3.7.
In Vitro Drug Release.Two formulations with constant GO wt% but different pectin concentrations were tested to demonstrate the influence of pectin on drug release (Figure 6).From the results obtained, the amount of DsiRNA 2) ( = 3).CS-GO nanocomposites were prepared using 0.1% w/v CS and 0.25% GO.
released was slightly higher in formulation coated with 0.1% w/v pectin.It could be due to the surface of nanocomposite that was coated with lower amount of pectin and hence would be degraded faster than the formulation with higher pectin amount.Nevertheless, the difference was insignificant ( > 0.05, independent -test).In contrast to that, a study conducted by Kushwaha et al. [32] demonstrated that the rate of drug released was mainly affected by pectin concentration used to coat the drug.The percentage of cumulative release of DsiRNA in SGF (pH 2) for 2 h was approximately 1% indicating a slow drug release pattern.Moving to SIF (pH 6.8), the percentage increased to approximately 2%.Only small amount of DsiRNA was detected in the medium owing to the properties of pectin which is insoluble in acidic pH [32].In contrast to that, a remarkable increase in release of DsiRNA was observed in SCF (pH 7.2) (from ∼2% to ∼52%).The maximum release was achieved after 3 h in SCF and the percent measured was 65.6% and 63.9% for 0.1% and 0.2% w/v pectin, respectively.The exposure to SCF and pectinase enzyme causes pectin to be degraded, thus releasing greater amount of DsiRNA.Inside the human colon, numerous bacteria secrete enzymes that break down the polymer backbone, resulting in decrease in its molecular weight, thus losing their mechanical strength and ending up with the release of drug entity.Besides pectin, CS is also able to shield drugs from harsh stomach and small intestine environment [33].A successful colon drug delivery requires the triggering mechanism in the delivery system that only responds to the physiological conditions particular to the colon [34].The in vitro release study showed that pectin was not much degraded in the stomach and small intestine but was susceptible to enzymatic breakdown in the colon by the natural microflora.Therefore, it is a suitable candidate to deliver and target drugs to the colon.A high release rate of DsiRNA in SCF with the presence of pectinolytic enzyme pointed out the ability of pectin coating to hinder premature release of drug in upper GI tract.
Cell Growth Inhibition Effect and VEGF Downregulation.
The cytotoxic effects of nanocomposites developed in this study were determined in normal and cancerous cell lines.The cytotoxic effects of CS-GO-DsiRNA nanocomposites coated with pectin and its individual component were investigated against human colon adenocarcinoma cells (Caco-2 cells) and human colon normal cells (CCD-18CO cells).The percentage of cell viability of nanocomposites in Caco-2 and CCD-18CO is shown in Figure 7. CS-GO-DsiRNA nanocomposites coated with pectin caused significant reduction in cell viability towards Caco-2 cells in which the percent reduced to almost half from the untreated cells ( < 0.05, ANOVA followed by Tukey's post hoc analysis).In normal cell line, the cell viability recorded was 85% and it was higher than in cancer cells.
Moreover, 33% loss of cell viability was measured after Caco-2 cells were treated with pectin followed by 13% loss by GO.Apart from being utilised as specific drug carrier to the colon, pectin also has been studied for its antitumor activity.According to Glinsky and Raz [18], antitumor activity of pectin was attributed to abundant -galactose present in modified citrus pectin (MCP).The primary mechanism of action for MCP is by inhibiting a -galactoside binding protein galectin-3 (Gal-3), which is responsible for angiogenic activity.Interestingly, induction of pectin in healthy cells, CCD-18CO, did not display any reduction in cell viability.Similar finding was reported elsewhere [2] in which pectin acts more destructively against Caco-2 colon cancer cells but not against healthy VERO cells.The selective killing effect towards cancer cells makes it a precious natural polysaccharide.In addition, GO used in this study has also been shown to enhance cytotoxicity effect displayed by final formulation.Recent study claimed that negatively charged oxygen group of GO can form electrostatic interaction with positively charged lipids present on cell membranes, thus destroying the cell membrane [30].In contrast to that, GO did not cause decreased cell viability in normal colon cells.This might be due to low amount of GO which was used in this assay (250 g/mL).Insignificant loss of peritoneal macrophages viability was also reported previously when the cells were exposed to GO at low concentration [30].In addition, both cell lines treated with uncoated formulation did not show any decrease in cell viability.The absence of pectin that is responsible for enhancing antitumor activity further eliminates the inhibitory effect of DsiRNA and GO.
In this study, DsiRNA targeting VEGF gene was used as the gene is excessively expressed in Caco-2 cells.It was reported that increased metastasis risk in colon cancer correlates well with VEGF expression [35].VEGF is most commonly associated with tumor angiogenesis which involved tumor growth and metastasis in human colon cancer [36] and it is supported elsewhere [37].The VEGF concentration level in Caco-2 colon cancer cells was determined by ELISA after being treated with pectin-enveloped CS-GO, DsiRNA-loaded CS-GO, and pectin-enveloped CS-GO-DsiRNA nanocomposites and naked DsiRNA for 24 h.From Figure 8, there was no significant change in VEGF concentration level observed after Caco-2 colon cancer cells were treated with pectin-enveloped CS-GO (blank nanocomposites) and naked DsiRNA for 24 h as compared to negative control (untreated cells), indicating the absence of a gene silencing effect.However, a lower VEGF concentration level was significantly ( value < 0.05) measured for the cells treated with pectinenveloped DsiRNA-loaded CS-GO nanocomposites after 24 incubation as shown in Figure 8.This finding demonstrated that pectin was able to protect and facilitate DsiRNA to accumulate at and act on the site of action which subsequently resulted in significant decrease of VEGF level [38].
Conclusions
In summary, pectin-coated CS-GO-DsiRNA nanocomposites were successfully developed via electrostatic interaction.FTIR analysis further confirmed successful synthesis of GO and CS-GO-pectin nanocomposites.The results revealed significant difference in particle size at each stage of nanocomposites preparation which signifies production of CS-GO-DsiRNA-pectin nanocomposites.Moreover, high entrapment efficiency of these nanocomposites indicated that DsiRNA was effectively adsorbed onto CS-GO carrier.CS-GO-DsiRNA nanocomposites coated with pectin selectively killed cancer cells, which could be a promising therapeutic agent for cancer treatment.Moreover, VEGF concentration level significantly decreased after 24 h incubation with the pectin-enveloped CS-GO-DsiRNA nanocomposites; thus it indicated the ability of nanocomposites to deliver DsiRNA effectively into the cells and subsequently cause gene silencing on the target gene.Further studies in animal model in vivo are necessary to study the safety and efficacy of CS-GO-DsiRNA nanocomposites coated with pectin.
Figure 1 :
Figure 1: FTIR spectra of LMW CS and water-soluble CS.
Figure 2 :
Figure 2: FTIR spectrum of GO prepared via Hummer's method.
Figure 7 :
Figure 7: Cytotoxicity effect of Caco-2 and CCD-18CO cells exposed to nanocomposites and their parent compounds at 24 h after incubation ( = 3). | 8,792 | 2017-05-25T00:00:00.000 | [
"Medicine",
"Materials Science"
] |
Sasaki–Einstein 7-manifolds and Orlik’s conjecture
We study the homology groups of certain 2-connected 7-manifolds admitting quasi-regular Sasaki–Einstein metrics, among them, we found 52 new examples of Sasaki–Einstein rational homology 7-spheres, extending the list given by Boyer et al. (Ann Inst Fourier 52(5):1569–1584, 2002). As a consequence, we exhibit new families of positive Sasakian homotopy 9-spheres given as cyclic branched covers, determine their diffeomorphism types and find out which elements do not admit extremal Sasaki metrics. We also improve previous results given by Boyer (Note Mat 28:63–105, 2008) showing new examples of Sasaki–Einstein 2-connected 7-manifolds homeomorphic to connected sums of S3×S4\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$S^3\times S^4$$\end{document}. Actually, we show that manifolds of the form #kS3×S4\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\#k\left( S^{3} \times S^{4}\right) $$\end{document} admit Sasaki–Einstein metrics for 22 different values of k. All these links arise as Thom–Sebastiani sums of chain type singularities and cycle type singularities where Orlik’s conjecture holds due to a recent result by Hertling and Mase (J Algebra Number Theory 16(4):955–1024, 2022).
Introduction
In the last 20 years, several techniques have been developed to determine the class of manifolds that admit metrics of positive Ricci curvature.For odd dimensions, Boyer, Galicki and collaborators established the abundance of positive Einstein metrics, actually Sasaki-Einstein metrics.In dimension five there are quite remarkable results (see [20], [9], [15]) that leads us to think it is conceivable to give a complete classification there.For dimension 7, there are several important results, in particular 7-manifolds arising as links of hypersurface singularities were studied intensively using as framework the seminal work of Milnor in [21] where S 3 -bundles over S 4 with structure group SO(4) were studied.Using the Gysin sequence for these fiber bundles one obtains three classes of 7-manifolds determined by the Euler class e: a) If e = ±1 then the fiber bundle has the same homology of the 7-sphere.b) If |e| 2 then the fiber bundle is a rational homology 7-sphere.c) If e = 0 then the fiber bundle has the homology of S 3 × S 4 .Milnor showed that manifolds situated on the class described in item a) were homeomorphic to S 7 .Furthermore, he proved that some of these bundles are not diffeomorphic to S 7 and as a result of that exhibited the first examples of exotic spheres.
An interesting example of manifolds described in b) is the Stiefel manifold of 2-frames in R 5 , V 2 (R 5 ).It is known that this manifold can be realized as a link of quadric in C 5 , and that V 2 (R 5 ) is a rational homology 7-sphere and moreover admits regular Sasaki-Einstein metric [3,1].This example was a source of new techniques that led to establish the existence of many Sasaki-Einstein structures.Actually in [3], a method for proving the existence of Sasaki-Einstein metrics on links of hypersurface singularities is described.In a sequel of papers [9], [6], [10], [11], Boyer, Galicki and their collaborators showed the existence of Sasaki-Einstein structures on exotic spheres, 2-connected rational homology 7-spheres and connected sums of S 3 × S 4 respectively.
In general, the homeomorphism type of links are not easy to be determined.However, for certain cases, one can calculate the integral homology of the links through formulas derived by Milnor and Orlik in [22] and Orlik in [23].In fact, Boyer exhibited fourteen examples [11] of Sasaki-Einstein 7-manifolds arising from links of isolated hypersurface singularities from elements of the well-known list of 95 codimension one K3 surfaces ( [18,12]).For these, the third integral homology group is completely determined.In [16] Gomez, calculated the torsion for the third integral homology group explicitly for ten examples of Sasaki-Einstein links of chain type singularities (also known as Orlik polynomials).
In this paper we benefit from a recent result by Hertling and Mase in [17] where they show that the Orlik conjecture is valid for chain type singularities, cycle type singularities and Thom-Sebastiani sums of them, and we update results given in [7], [11] and [16].From the list of 1936 Sasaki-Einstein 7-manifolds realized as links from the list given in [7,19] we detect 1673 that are links of hypersurface singularities of these types.Thus, via Orlik's algorithm we calculate the third homology group for this lot.Among them, we found 52 new examples of 2-connected rational homology spheres admitting Sasaki-Einstein metrics.We also found 124 new examples of 2-connected Sasaki-Einstein 7-manifolds of the form #2k(S 3 × S 4 ), improving a result of Boyer's in [11].Six of these new examples are links of quasi-smooth Fano 3-folds coming from Reids list of 95 weighted codimension 1 K3 surfaces [18], [12], the rest of them, are links taken from the list given in [19].
In recent years Sasaki-Einstein geometry has been intensely studied, part of this interest comes from a string theory conjecture known as the AdS/CFT correspondence which, under certain circumstances, relates superconformal field theories and Sasaki-Einstein metrics in dimension five and seven [27].Also, certain rational homology spheres can be used to construct positive Sasakian structures in homotopy 9-spheres [8].Thereby, it is important to have as many examples as possible of Sasaki-Einstein manifolds especially in dimensions five and seven.
This paper is organized as follows: in Section 2 we give some preliminaries on the topology of links of hypersurface singularities.In Section 3, as a consequence of the explicit calculations on the topology of new rational Sasaki-Einstein homology 7-spheres given in this paper, we update some results on homotopy 9-spheres admitting positive Ricci curvatures given in [8].Then we present Table 1 (listing new examples of rational homology 7-spheres admitting Sasaki-Einstein metrics), Table 2 (listing new examples of 7-links homeomorphic to connected sums of S 3 ×S 4 admitting Sasaki-Einstein metrics produced from the Johnson and Kollár list) and Table 3 (listing new examples of Sasaki-Einstein 7-links homeomorphic to connected sums of S 3 × S 4 produced from Cheltsov's list).In these three tables we list the weights, one quasihomogenous polynomial generating the link, the type of singularity, the degree, the Milnor number and finally the third homology group.In Section 4, we give a link to the four codes implemented in Matlab, these codes determine whether or not the links come from the admissible type of singularities where Orlik's conjecture is valid, and compute the homology groups of the links under discussion.we also give links to three additional tables (listing 7-links with non-zero third Betti number and with torsion).
Acknowledgments.The first author thanks Ralph Gomez and Charles Boyer for useful conversations.Part of this article was prepared with the financial support from Pontificia Universidad Católica del Perú through project DGI PI0655.
Preliminaries: Sasaki-Einstein metrics on links and Orlik's conjecture
In this section we briefly review the Sasakian geometry of links of isolated hypersurface singularities defined by weighted homogeneous polynomials.We describe the explicit constructions of Sasaki-Einstein manifolds given by Boyer and Galicki [5].Then we give some known facts on the topology of links of hypersurface singularites [21], [22] and state Orlik's conjecture.We also set up a table with the necessary conditions to obtain links where this conjecture is known to be valid.
2.1.
Links and Sasaki-Einstein metrics.Consider the weighted C * action on C n+1 given by (z 0 , . . ., z n ) −→ (λ w 0 z 0 , . . ., λ wn z n ) where w i are the weights which are positive integers and λ ∈ C * .Let us denote the weight vector by w = (w 0 , . . ., w n ) .We assume gcd (w 0 , . . ., w n ) = 1.Recall that a polynomial f ∈ C [z 0 , . . ., z n ] is said to be a weighted homogeneous polynomial of degree d and weight w = (w 0 , . . ., w n ) if for any λ ∈ C * = C\{0}, we have We are interested in those weighted homogeneous polynomials f whose zero locus in C n+1 has only an isolated singularity at the origin.The link where S 2n+1 is the (2n + 1)-sphere in C n+1 .By the Milnor fibration theorem [21], -manifold that bounds a parallelizable manifold with the homotopy type of a bouquet of n-spheres.Furthermore, L f (w, d) admits a quasi-regular Sasaki structure in a natural way, see for instance [26].Moreover, if one considers the locally free S 1 -action induced by the weighted C * action on f −1 (0) the quotient space of the link L f (w, d) by this action is the weighted hypersurface Z f , a Kähler orbifold.Actually, we have the following commutative diagram [5] L where S 2n+1 w denotes the unit sphere with a weighted Sasakian structure, P(w) is a weighted projective space coming from the quotient of S 2n+1 w by a weighted circle action generated from the weighted Sasakian structure.The top horizontal arrow is a Sasakian embedding and the bottom arrow is Kählerian embedding and the vertical arrows are orbifold Riemannian submersions.
It follows from the orbifold adjuntion formula that the link L f admits a positive Ricci curvature if the quotient orbifold Z f by the natural action S 1 is Fano, which is equivalent Here |w| = m i=0 w i denotes the norm of the weight vector w and d f is the degree of the polynomial f .Furthermore, in [3], Boyer and Galicki found a method to obtain 2connected Sasaki-Einstein 7-manifolds from the existence of orbifold Fano Kähler-Einstein hypersurfaces Z f in weighted projective 4-space P(w).Actually, they showed a more general result: Theorem 2.1.The link L f (w, d) admits a Sasaki-Einstein structure if and only if the Fano orbifold Z f admits a Kähler-Einstein orbifold metric of scalar curvature 4n(n + 1).
In [19], Johnson and Kollár give a list of 4442 quasi-smooth Fano 3 -folds Z anticanonically embedded in weighted projective 4-spaces P(w).Moreover, they show that 1936 of these 3 -folds admit Kähler-Einstein metrics.Thus, such Fano 3-folds give rise to Sasaki-Einstein metrics on smooth 7-manifolds realized as links of isolated hypersurface singularities defined by weighted homogenous polynomials.In [7] they extracted from this list 184 2-connected rational homology 7-spheres.They also determined the order of H 3 (L f (w, d), Z).In [16], Gomez used Orlik's conjecture to calculate the homology of 10 elements of the list given in [7], all the 7-manifolds found there are links of chain type singularities.
In this paper we completely determine the third homology group for 1673 of 2-connected Sasaki-Einstein 7-manifolds from the list 1936 smooth 7-manifolds realized as links of isolated hypersurface singularities from the list given in [7].Among them, we found 52 new examples of 2-connected rational homology spheres admitting Sasaki-Einstein metrics.We also found 124 new examples of 2-connected Sasaki-Einstein 7-manifolds of the form #2k(S 3 × S 4 ), improving a result of Boyer in [11], see Theorem 3.1.Of that lot, 118 come from the list given in [19], the other 6 examples are links of quasi-smooth Fano 3-folds coming from Reids list of 95 weighted codimension 1 K3 surfaces, where all members of the list but 4 admit Kähler-Einstein metrics, see [12].
2.2.The Topology of Links and Orlik's Conjecture.In this section we review some classical results on the topology of links of quasi-smooth hypersurface singularities.Recall that the Alexander polynomial ∆ f (t) in [21] associated to a link L f of dimension 2n − 1 is the characteristic polynomial of the monodromy map induced by the S 1 w -action on the Milnor fibre F .Therefore, ∆ f (t) = det (tI − h * ).Now both F and its closure F are homotopy equivalent to a bouquet of n-spheres S n ∨ • • • ∨ S n , and the boundary of F is the link The following are standard facts, see [5].
(1) L f is a rational homology sphere if and only if ∆ f (1) = 0.
(3) If L f is a rational homology sphere, then the order of There is a remarkable theorem of Levine (see [21], page 69) that determines the diffeomorphism type for rational homology spheres.More preceisely, we have Theorem 2.2.Let L f be homeomorphic to the (2n − 1)-sphere for n odd.L f is diffeomorphic to the standard sphere if ∆ f (−1) ≡ ±1(mod8) and L f is diffeomorphic to the exotic Kervaire sphere if ∆ f (−1) ≡ ±3(mod8).
In the case that f is a weighted homogeneous polynomial there is an algorithm due to Milnor and Orlik [22] to calculate the free part of H n−1 (L f , Z) .The authors associate to any monic polynomial f with roots α 1 , . . ., α k ∈ C * its divisor where the u ′ i s and v ′ i s are given terms of the degree d of f and the weight vector w = (w 0 , . . .w n ) by the equations Using the relations where a j ∈ Z and the sum is taken over the set of all least common multiples of all combinations of the u 0 , . . ., u n .Then the Alexander polynomial has an alternative expression given by ∆ f (t) = (t − 1) (−1) n j t j − 1 a j , and Moreover, Milnor and Orlik gave and explicit formula to calculate the free part of where the sum is taken over all the 2 n+1 subsets {i 1 , . . ., i s } of {0, . . ., n}.In [23], Orlik gave a conjecture which allows to determine the torsion of homology of the link solely in terms of the weight of f.
Conjecture 2.3 (Orlik).Consider {i 1 , . . ., i s } ⊂ {0, 1, . . ., n} the set of ordered set of s indices, that is, Let us denote by I its power set (consisting of all of the 2 s subsets of the set), and by J the set of all proper subsets.Given a (2n + 2)-tuple (u, v) = (u 0 , . . ., u n , v 0 , . . ., v n ) of integers, let us define inductively a set of 2 s positive integers, one for each ordered element of I, as follows: .
Similarly, we also define a set of 2 s real numbers by where respectively.Finally, for any j such that 1 j r = ⌊max {k i 1 ,...,is }⌋, where ⌊x⌋ is the greatest integer less than or equal to x, we set This conjecture was known to hold in certain special cases [24].In a rather recent paper, Hertling and Mase [17], extend the list of possible cases and showed that this conjecture is true for the following cases: (1) Chain type singularity: that is, a quasihomogeneous singularity of the form for some n ∈ N and some a 1 , . . ., a n ∈ N.
(2) Cycle type singularity: a quasihomogeneous singularity of the form for some n ∈ Z 2 and some a 1 , . . ., a n ∈ N which satisfy for even n neither a j = 1 for all even j nor a j = 1 for all odd j. (3) Thom-Sebastiani iterated sums of singularties of chain type or cyclic type.Recall, for f and g be singularities, the Thom-Sebastiani sum is given by Any iterated Thom-Sebastiani sum of chain type singularities and cycle type singularities are also called invertible polynomials.(4) Although Brieskorn-Pham singularities, or BP singularities x a i i for some n ∈ N and some a 1 , . . ., a n ∈ Z 2 are special cases of chain type singularities we prefer to label them as an independent type of singularity.
In order to use Orlik's conjecture, we need to find from the two lists of Kähler-Einstein oribifolds aforementioned, elements with weights w = (w 0 , w 1 , w 2 , w 3 , w 4 ) and degree d = (w 0 + • • • + w 4 ) − 1 that can be represented by BP singularities, chain type singularities, cycle type singularities or Thom Sebastiani sum of these types of singularities.Thus, given a weight vector w = (w 0 , w 1 , w 2 , w 3 , w 4 ) one needs to determine if there exist exponents a i 's that verify certain arithmetic condition.We include these conditions in the table below.
Most of the examples that we describe in this article have large weights and degrees, so in order to avoid monotonous calculations, codes were implemented in Matlab (see (d) in Appendix) to determine for a given weight vector w = (w 0 , w 1 , w 2 , w 3 , w 4 ) the exponents a i 's, such that the singularity can be written as a chain type, cycle type or Thom-Sebastiani sum of them.We also wrote a code to computes the Betti numbers and the numbers d i which generate the torsion in H 3 (L f , Z).
Type Polynomial Condition
Next, we discuss an interesting result on the topology of the link if the degree d and the weight vector w are such that gcd(d, w i ) = 1 for all i.Several elements on the tables we present in Section 3, satisfy this condition.Notice that, if the type of singularities is restricted to the ten cases described on the table given above, then gcd(d, w i ) = 1 for all i forces the singularity to be of cycle type.We restrict to dimension 7, but this remark can be easily generalized for any dimension.
Lemma 2.4.Consider a 7-manifold M arising as a link of a hypersurface singularity with degree d and such that gcd(d, w i ) = 1 for all i = 0, . . .4, then µ + 1 = d(b 3 + 1) and H 3 (M, Z) tor = Z d .In particular, for hypersurface singularities that determine a rational homology sphere with gcd (d, w i ) = 1 for all i we obtain exactly d − 1 vanishing cycles.
Since gcd(d, w i ) = 1 we have u i = d and v i = w i .Then the above formula can be simplified as follows On the other hand, the Milnor number is given by µ From ( 8), we obtain µ + 1 = d(b 3 + 1).For the second claim, one notices that gcd(d, Then, for j = 1, we get d 1 = d, while in other cases, d j = 1.Therefore, from Equation ( 7) one concludes that if gcd(d, w i ) = 1 for all i = 0, . . .n then H 3 (M, Z) tor = Z d .
Results
In this section we present three tables of new examples of two classes of 7-manifolds: rational homology 7-spheres, and 7-manifolds homeomorphic to connected sums of S 3 × S 4 , all admitting Sasaki-Einstein metrics.These new examples are extracted from two lists of Kähler-Einstein orbifolds, the first one given by Johnson and Kollár in [19] (see (a) in Appendix for link to the list) and the other one given by Cheltsov in [12].In these three tables we list the weights, the quasihomogenous polynomial generating the link, the type of singularity, the degree, the Milnor number and finally the third homology group.But, first, we discuss the diffeomorphism type of certain homotopy spheres admitting positive Ricci curvature.
3.1.
Positive Ricci curvature on homotopy spheres.The existence of positive Sasakian metrics on homotopy spheres was study in detail by Boyer, Galicki and collaborators, where the authors exhibited inequivalent families of homotopy spheres admitting this type of metrics based on their list of 184 rational homology 7-spheres admitting Sasakian-Einstein metrics.We apply the methods developed in [7] and [8] to the list of the new 52 rational homology 7-spheres presented in this paper.Let us briefly review some basic facts on links viewed as branched covers.The main reference here is [8].
Let f = f (z 1 , . . ., z m ) be a quasi-smooth weighted homogeneous polynomial of degree d f in m complex variables, and let L f denote its link.Let w f = (w 1 , . . ., w m ) be the corresponding weight vector.We consider branched covers constructed as the link L g of the polynomial g = z p 0 + f (z 1 , . . ., z m ) , for p > 1.Then L g is a p-fold branched cover of S 2m+1 branched over the link L f .The degree of L g is d g = lcm (p, d f ), and the weight vector is We have Thus, from Equation (1) and Theorem 2.1 it follows that L g admits positive Ricci curvature for all p > 0 if L f admits a Sasaki-Einstein metric.
Furthermore, in [8] the authors showed that the link L g is a homotopy sphere if and only if L f is a rational homology sphere.Let us recall the argument, since it will be useful for our purpose.Here we assume that gcd (p, d f ) = 1.
For the divisor div ∆ g we have the equalities Since the j 's run through all the least common multiples of the set {u 1 , . . ., u n } and gcd (p, u i ) = 1 for all i, we note that for all j, gcd(p, j) = 1.Thus So L g is a rational homology sphere.Now, the computation of the Alexander polynomial for L g leads to: This gives ∆ g (1) = p Σ j a j +(−1) n .
Thus from Equation ( 5) it follows that L g is a homotopy sphere if and only if L f is a rational homology sphere.
In [8], the authors made use of Equation (10) and apply Theorem 2.2 to determine the diffeomorphism type for L g for 7-dimensional links L f coming from their list of rational homology 7-spheres [7].If L f has odd degree they proved.Theorem 3.1 (Boyer et al. [7]).Let L g be the link of a p-branched cover of S 2n+1 branched over a link L f in their list of 184 rational homology spheres given in [7].Suppose that degree d f of L f is odd and that gcd(p, d f ) = 1.Then, for p odd, L g is diffeomorphic to the standard 9-sphere S 9 .For p even, L g is diffeomorphic to The next theorem is an update of Theorems 7.6 and Theorems 7.7 in [8] and its proof is similar to the one given by Boyer et al.
Theorem 3.2.Let us assume for gcd(p, d f ) = 1, then for each member of the 52 rational homology spheres in our list given in Table 1, L g is a homotopy sphere admitting positive Ricci curvature.If L f is taken to be one of the 49 rational homology spheres with odd degree listed in Table 1, then L g admits Sasakian metrics with positive Ricci curvature and these are diffeomorphic to the standard sphere S 9 .For the remaining three members with even degree in Table 1, we have (2) For L f with w = (64, 512, 475, 375, 175) with degree d f = 1600, L g diffeomorphic to the standard S 9 if p 3 ≡ ±1(mod8) and diffeomorphic to the exotic Kervaire 9-sphere Σ 9 if p 3 ≡ ±3(mod8).
Proof.Our list of rational homology 7-spheres admitting Sasaki-Einstein metrics contains 49 elements with odd degree and all of these have order |H 3 | ≡ 1(mod8), thus, the first part of the theorem follows from Theorem 3.1.For the three members with even degree in Table 1, the weight vectors can be written as w = (w 0 , w where gcd(m 2 , m 3 ) = 1 and m 2 m 3 = d where m 2 is odd and m 3 is even.For these one obtains where n(w) is a positive integer.It follows from Equation (4) that j even a j = n(w) + 1.
On the other hand, bearing in mind Theorem 2.2, we pay attention to ∆ g (−1) : since p is odd and d even, from Equation (10) we have Since L f is a rational homology sphere H 3 (L f , Z) tor = ∆(1), so in order to compute ∆(1) we rewrite the equation (11) in terms of the Alexander polynomial: . The result follows by checking the torsion of H 3 (L f , Z) at Table 1 below.3.2.Rational homology 7-spheres admitting Sasaki-Einstein structures from Johnson-Kollár list.In [19] Johnson and Kollár gave a list of 4442 of well-formed quasismooth Q-Fano 3-folds of index one anti-canonically embedded in CP 4 (w).This list contains 1936 3-folds that admit a Kähler-Einstein metric.Theorem 2.1 implies that the corresponding link admits Sasaki-Einstein metrics.These links were studied in [7] and they found 184 rational homology 7-spheres.In the following table, we present 52 new examples of 2-connected rational homology spheres admitting Sasaki-Einstein metrics.All the rational homology 7-spheres that we exhibit in Table 1 come from polynomials that are of cycle type or that have that type of singularity as part of its Thom-Sebastiani representation.The examples found in [7] also have that particular feature.Actually, in [7], Lemma 3.3 provides examples of cycle type singularities, while Lemma 3.10 provides examples of cycle+cycle type singularities.Indeed, if we can write the vector w = (w 0 , w 1 , w 2 , w 3 , w where gcd(m 2 , m 3 ) = 1 and m 2 m 3 = d, then Lemma 3.10 in [7] implies that the number is a positive integer.This is equivalent to Multiplying by m 3 , we have d = w 3 + w 4 a 4 (12) where a 4 > 1 is integer.Analogously, we get m 2 = v 4 + v 3 (1 + n(w)v 4 ).Multiplying by m 2 , we obtain d = w 4 + w 3 a 3 (13) From ( 9) and ( 10) it follows that there exist a cycle block z 4 z a 3 3 + z 3 z a 4 4 that is a summand of some invertible polynomial associated to w.We conjecture that all rational homology 7-spheres arising as links of hypersurface singularities can be given by polynomials that must contain a cycle singularity term in its representation as a Thom-Sebastiani sum, at least when d = 4 i w i + 1. Remark 3.1.In [5], from links L ′ f of quasi-smooth weighted hypersurfaces f ′ (z 2 . . .z n ), the authors consider the quasi-smooth weighted hypersurfaces . It is not difficult to show that the corresponding link L f admits Sasakian structures with positive Ricci curvature.Additionally, if the link L ′ f is a rational homology sphere, from Sebastiani-Thom theorem [25] it follows that the link L f is a rational homology sphere.Thus, from each of the new 52 rational homology 7-spheres one can produce a rational homology 11-sphere admitting positive Ricci curvature.
Following the terminology given in [7], a rational homology 7-spheres with the same degree d, Milnor number µ and order of H 3 is a twin.We detect ten twins in Table 1; except the couple given by the weights (2323, 1611, 562, 151, 899) and (2387, 1579, 661, 148, 771), both of them of cycle type, with d = |H 3 | = 5545 and µ = 5544, the rest of twins have weight vectors with identical first two components w 0 , w 1 .As mentioned in [7], it is tempting to conjecture that twins are homeomorphic or even diffeomorphic links, however we are not able to determine this.In an upcoming article, we study the behavior of twins trough an operation associated to the Berglund-Hübsch transpose rule from BHK mirror symmetry [2].
3.3.Sasaki-Einstein 7-manifolds of the form #k(S 3 × S 4 ) from the Johnson-Kollár's list and Cheltsov's list.In [3] it is proven that a 2-connected oriented 7manifold M that bounds a parallelizable 8-manifold with H 3 (M, Z) torsion free is completely determined up to diffeomorphism by the rank of H 3 (M, Z).Moreover M is diffeomorphic to #k S 3 × S 4 #Σ 7 for some homotopy sphere Σ 7 ∈ bP 8 that bounds a parallizable 8-manifold (one of the 28 possible smooth structures on the oriented 7-sphere).Thus, if the link has no torsion it is homeomorphic to k# S 3 × S 4 where k is the rank of the third homology group.In [5], it is showed that, being of Sasaki type, the link has k even.
Until now, the only cases where Sasaki-Einstein metrics were known to exist on 7manifolds of the form #k S 3 × S 4 are #222 S 3 × S 4 and #480 S 3 × S 4 .We found 124 new examples of 2-connected Sasaki-Einstein 7-manifolds of the form #k(S 3 × S 4 ).Of that lot, 118 come from the list given in [19], the other 6 examples are links of quasi-smooth Fano 3-folds coming from Reids list of 95 weighted codimension 1 K3 surfaces.We can state the following result.2 and Table 3.
Notice that in Table 2, k assumes every even number between 2 and 32 except k = 22 and k = 28, however the elements in Table 3 give sporadic values for k. i=1 w i = d can be used to generate Q-Fano 3-folds [18].These threefolds are hypersurfaces X d of degree d in weighted projective spaces of the form CP(1, w 1 , w 2 , w 3 , w 4 ) with 4 i=1 w i = d.In [12], Cheltsov studied these sort of Q-Fano 3-folds and proves that 91 elements admit Kähler-Einstein orbifold metrics and thereby the links associated to them admit Sasaki-Einstein metrics.The four that fail the test are numbers 1, 2, 4, and 5 in the list given in [18].From this list, we found that 88 of these links are links of singularities of chain type, cycle type or an iterated Thom-Sebastiani sum of chain type singularities and cycle type singularities.We compute the third homology group for this lot and found 6 new examples of Sasaki-Einstein 7-manifolds with no torsion on the third homology group.In Table 3 we present these six elements and include the two elements found by Boyer in [11], the ones with weights (1, 1, 1, 4, 6) and (1,1,6,14,21).This list is not necessary exhaustive for Cheltsov's list, since neither numbers 52, 81 nor 86 in [18] admit descriptions in terms of the type of singularities where Orlik's conjecture is known to be valid.However, our computer program suggests that these members do have torsion.Also, it is interesting to notice that, the Q-Fano 3-folds considered by Cheltsov are d-fold branch covers for certain weighted projective spaces branched over orbifold K3 surfaces of degree d.It follows from a well-known result on links of branched covers (see Proposition 2.1 in [14]), that the 91 Sasaki-Einstein links associated to the 3-folds of Cheltsov's list can be realized as d-fold branched covers of S 7 branched along the submanifolds #k(S 2 × S 3 ).Actually, we extend Theorem 4.6 in [11].Theorem 3.4.There exist Sasaki-Einstein metrics on the 7-manifolds M 7 which can be realized as d-fold branched covers of S 7 branched along the submanifolds #k(S 2 × S 3 ) where
Theorem 3 . 3 .
Sasaki-Einstein metrics exist on 7-manifolds of the form #k S 3 × S 4 for 22 different values of k, where k = rank(H 3 (M, Z)) is given in Table
Table 2 :
SE 7-manifolds of the form #k(S 3 × S 4 ) from the Johnson-Kollár list Reid's list of 95 codimension one K3 surfaces of the form Y d ⊂ CP(w 1 , w 2 , w 3 , w 4 ) with | 7,485.8 | 2022-08-23T00:00:00.000 | [
"Mathematics"
] |
Preparation, micro-structure andmechanical properties of porous carbonate apatite using chicken bone as rawmaterial
Use your smartphone to scan this QR code and download this article ABSTRACT Introduction: The increasing in bone loss due to trauma disease or accident leads to the demand for bone substitute materials. Autograft is the gold standard for bone graft; however, it has limited supplies and additional surgery for harvesting. Therefore, the artificial bone graft is necessary for bone defect treatment. One of the potential candidates for bone repair and regeneration is porous Carbonate Apatite (CO3Ap) due to its excellent biodegradability and biocompatibility. Understanding the importance of recycling ofwastematerials and by-products from the food industry such as poultry bone could be used as the alternative source for starting material. Thus, recycling and adding value for the food waste could be done with the development of artificial bone graft. Method: In this study, porous CO3Apwas prepared using bio apatite powder derived from chicken bone via the sacrificial template technique. Templates such as chopped cotton fibers were used to make porous CO3Ap with interconnecting pores. X-ray diffraction (XRD) and Scanning electron microscope (SEM) were used to characterize phase composition and structure. Carbonate content was evaluated using elemental analysis CHN (carbon hydrogen and nitrogen) analyzer. Results: The results indicated that a pure CO3Ap block consisting of elongating pores could be obtained. Mechanical strength was evaluated in terms of Diametral Tensile Strength (DTS). Porous CO3Ap had a mean DTS value of 198.5 ± 12.9 kPa and an average porosity of approximately 70%. As a conclusion, CO3Ap containing 5.25 ± 0.03 %wt of carbonate with low crystallinity and the interconnecting porous structure is advantageous on bone regeneration as being a useful material for application to repair the bone defect.
INTRODUCTION
During the last few decades, there is a tendency to an increase of cartilage and bone-related diseases, skeletal defects, tumor resection, skeletal abnormalities and bone fractures originating from trauma. Human bone can regenerate itself. However, this ability is reduced with time. Moreover, the incomplete repair sometime occurred in case of significant defects. Although auto-graft is the gold standard for bone substitutes, it has many drawbacks such as limited supply, trauma and additional pain for patients. Therefore, the artificial bone graft is necessary to satisfy the demand for bone fracture treatment. Currently, porous bioceramics have played an essential role in clinical application as bone substitute materials. One of the most popular topics focuses on interconnecting porous carbonate apatite due to its excellent biodegradability and biocompatibility. Carbonate apatite (CO 3 Ap) has a composition similar to that of natural bone exhibiting good osteoconductive and osteogenic properties 1-3 . CO 3 Ap as bone substitute materials for implant surgery could be granule, block or cement. Granule, block and cement could be prepared from CO 3 Ap powder. Several methods were conducted to synthesize CO 3 Ap powder such as precipitation, hydrothermal and mechanical activation [4][5][6][7][8][9][10] . Besides, CO 3 Ap powder was also extracted from natural bone sources such as pig and cow bone or teeth, etc, This method is advantageous on economic and environmental benefits since it used the waste product of the food industry as the raw material [11][12][13] . Several methods were used to fabricate porous material from powder such as replica technique, direct foaming, etc. In order to recycle and add value for the food waste, in this study, the chicken bone was collected and used as raw materials for fabrication bone substitute. The sacrificial template technique was used for preparing porous carbonate apatite (CO 3 Ap) due to its simplicity. Chopped cotton fibers are available in the form of long rods, and incorporation of these materials into a powder mix can produce interconnecting porosity.
MATERIALS AND METHOD Preparation
Chicken bone was collected from food waste. It was firstly removed the meat and fat residues, then rinsed with water. Bone was immersed in 2% sodium chloride (NaCl, 7548-4100, Daejung chemical & metals Co. Ltd, Gyeonggi-do, Korea) solution for 2h at 100 o C for further removing of the other debris.
Then, the bone was washed and filtered several times with distilled water using a vacuum bump until the filtrate was neutral.
The prepared bone was dried in the oven for 24h and ground until as-prepared powder can pass through the 45-mm sieve. Cotton fibers used as a sacrificial template were cut in 2-3 mm in length.
Chicken bone powder and cotton fibers were mixed to prepare powder mixture (7% wt cotton fibers). The mixture was added with PVA (Polyvinyl alcohol 1500, 9002-89-5, Daejung chemical & metals Co. Ltd, Gyeonggi-do, Korea) solution (5% wt) as a binder to make a paste with the liquid to powder ratio 3:2.
The paste was packed into Teflon mold (10mm x 5mm). A mold was then placed inside the oven to dry at 100 o C for 24 h. After setting, mold was unpacked to obtain a composite block. The composite block was sintered at 600 o C for 2h under air for removing of cotton fiber. For as-prepared powder preparation, the experiment was repeated 3 times.
Phase characterization
The composition of the heat-treated composite block was done by means of powder X-ray diffraction (XRD) analysis.
After heat treatment, specimens were ground to fine powder, and the XRD patterns were recorded using a diffractometer system (D8 Advance, Bruker AXS GmbH, Karlsruhe, Germany) using Vario1 Johansson focusing mono-chromator and high flux CuKa radiation generated at 40 kV and 40 mA. The specimens were scanned from 20º to 60º 2q (where q is the Bragg angle) in a continuous mode.
The carbonate content of the specimen was measured using the CHN coder (Yanako CHN coder. MT-6, Tokyo, Japan).
Microstructure properties
Morphology of the sintered specimen was observed by a Scanning Electron Microscope (SEM: S-3400N, Hitachi High-Technologies Co., Tokyo, Japan) at 10 kV of accelerating voltage after gold sputter coating.
Total porosity was estimated based on the measurement of volume and weight. The average value was calculated from the porosity value of 5 specimens.
Mechanical properties
Mechanical strength was evaluated in terms of Diametral Tensile Strength (DTS) using Universal Testing Machine (AGS-J, Shimadzu Corporation, Kyoto, Japan). The test was done with 5 specimens. The average DTS value was calculated from the DTS value of 5 specimens. Figure 1 showed the XRD patterns of the composite block (7% wt cotton fibers) heating at 600 o C for 2h. It can be seen that the pure apatite phase could be obtained and no secondary phase was detected. The XRD patterns showed the single phase of apatite, corresponding to the ICDD standard peak of stoichiometric hydroxyapatite (standard No.09-0432). It indicated that sintering at high temperatures resulted in the sharp and narrow diffraction peaks indicating high crystallinity. Sintering at elevated temperature was effective to remove completely cotton fiber as well as enhance the mechanical strength however the bioactivity of apatite would be reduced. In this case, sintering at 600 o C was found to be suitable to eliminate the organic phase and maintain the low crystallinity. Figure 2 showed the SEM image of the facture surface of the composite block (7% wt cotton fibers) heating at 600 o C for 2h. Macro-pores were obviously formed by burning out of the nylon fiber. After removing cotton fibers, the interconnecting porous structure could have appeared. Pores are interconnecting and in the form of long tunnels mimicking the morphology of cotton fiber.
RESULTS
Total porosity was estimated by measuring the bulk volume and weight of the heated block. Composite block with 7% wt cotton fibers was about 70% in total porosity after burning. Mechanical strength was evaluated in terms of Diametral Tensile Strength (DTS). Porous CO 3 Ap had a mean DTS value of 198.5 ± 12.9 kPa.
DISCUSSION
Hence, the cotton fiber introduction was found to be effective for the formation of the porous structure. An interconnected porous network is an optimal environment for cell migration, proliferation and differentiation. Increasing of the interconnected pores supports extensive vascularization, rapid bone regeneration and good osteointegration [14][15][16][17][18][19] . It should be mentioned that porosity and mechanical strength have a trade-off relationship [20][21][22] . Therefore optimization of the porosity and mechanical strength is important for the clinical use of porous CO 3 Ap. Although the value of mechanical strength is not so high, specimen after sintering could fulfill the requirement of handling properties. This property is important since bone substitute materials need to be cut in the specific shape to fix the various shape of the bone defect.
The carbonate content of the specimen after sinter-
CONCLUSIONS
Solving the problem of waste is about value creation. In this study, a by-product of the food industry was useful for producing bone substitute materials. The results indicated that interconnecting porous carbonate apatite block could be fabricated using chicken bone powder/cotton fiber composite as starting materials. The specimen has fulfilled the requirement of handling property and has the same chemical composition as human bone. That means it could be advantageous with respect to use as bone substitutes. In the next research, in vitro and in vivo experiments need to be conducted to prove the biodegradable and biocompatible properties. | 2,123.8 | 2020-01-23T00:00:00.000 | [
"Materials Science",
"Medicine"
] |
Multimodal Conversational AI: A Survey of Datasets and Approaches
As humans, we experience the world with all our senses or modalities (sound, sight, touch, smell, and taste). We use these modalities, particularly sight and touch, to convey and interpret specific meanings. Multimodal expressions are central to conversations; a rich set of modalities amplify and often compensate for each other. A multimodal conversational AI system answers questions, fulfills tasks, and emulates human conversations by understanding and expressing itself via multiple modalities. This paper motivates, defines, and mathematically formulates the multimodal conversational research objective. We provide a taxonomy of research required to solve the objective: multimodal representation, fusion, alignment, translation, and co-learning. We survey state-of-the-art datasets and approaches for each research area and highlight their limiting assumptions. Finally, we identify multimodal co-learning as a promising direction for multimodal conversational AI research.
Introduction
The proliferation of smartphones has dramatically increased the frequency of interactions that humans have with digital content. These interactions have expanded over the past decade to include conversations with smartphones and in-home smart speakers. Conversational AI systems (e.g., Alexa, Siri, Google Assistant) answer questions, fulfill specific tasks, and emulate natural human conversation (Hakkani-Tür et al., 2011;Gao et al., 2019).
Early examples of conversational AI include those based on primitive rule-based methods such as ELIZA (Weizenbaum, 1966). More recently, conversational systems were driven by statistical machine translation systems: translating input queries to responses (Ritter et al., 2011;Hakkani-Tür et al., 2012). Orders of magnitude more data led to unprecedented advances in conversational technology in the mid-part of the last decade. Tech-niques were developed to mine conversational training data from the web search query-click stream (Hakkani-Tür et al., 2011;Heck, 2012;Hakkani-Tür et al., 2013) and web-based knowledge graphs (Heck and Hakkani-Tür, 2012;El-Kahky et al., 2014). With this increase in data, deep neuralnetworks gained momentum in conversational systems (Mesnil et al., 2014;Heck and Huang, 2014;Sordoni et al., 2015;Vinyals and Le, 2015;Shang et al., 2015;Serban et al., 2016;Li et al., 2016a,b).
One limitation of existing agents is that they often rely exclusively on language to communicate with users. This contrasts with humans, who converse with each other through a multitude of senses. These senses or modalities complement each other, resolving ambiguities and emphasizing ideas to make conversations meaningful. Prosody, auditory expressions of emotion, and backchannel agreement supplement speech, lip-reading disambiguates unclear words, gesticulation makes spatial references, and high-fives signify celebration.
Alleviating this unimodal limitation of conversational AI systems requires developing methods to extract, combine, and understand information streams from multiple modalities and generate multimodal responses while simultaneously maintaining an intelligent conversation.
Similar to the taxonomy of multimodal machine learning research (Baltrušaitis et al., 2017), the research required to extend conversational AI systems to multiple modalities can be grouped into five areas: Representation, Fusion, Translation, Alignment, and Co-Learning. Representation and fusion involve learning mathematical constructs to mimic sensory modalities. Translation maps relationships between modalities for cross-modal reasoning. Alignment identifies regions of relevance across modalities to identify correspondences between them. Co-learning exploits the synergies across modalities by leveraging resourcerich modalities to train resource-poor modalities.
Concurrently, it is necessary for the research areas outlined above to address four main challenges in multimodal conversational reasoning -disambiguation, response generation, coreference resolution, and dialogue state tracking . Multimodal disambiguation and response generation are challenges associated with fusion that determine whether available multimodal inputs are sufficient for a direct response or if follow-up queries are required. Multimodal coreference resolution is a challenge in both translation and alignment, where the conversational agent must resolve referential mentions in dialogue to corresponding objects in other modalities. Multimodal dialogue state tracking is a holistic challenge across research areas typically associated with task-oriented systems. The goal is to parse multimodal signals to infer and update values for slots in user utterances.
In this paper, we discuss the taxonomy of research challenges in multimodal Conversational AI as illustrated in Figure 1. Section 2 provides a history of research in multimodal conversations. In Section 3, we mathematically formulate multimodal conversational AI as an optimization problem. Sections 4, 5, and 6 survey existing datasets and state-of-the-art approaches for multimodal representation and fusion, translation, and alignment. Section 7 highlights limitations of existing research in multimodal conversational AI and explores multimodal co-learning as a promising direction for research.
Background
Early work in multimodal conversational AI focused on the use of visual information to improve automatic speech recognition (ASR). One of the earliest papers along these lines is by Yuhas et al. (1989) followed by many papers including work by Meier et al. (1996), Duchnowski et al. (1994), Bregler and Konig (1994, and Ngiam et al. (2011). Advances in client-side capabilities enabled ASR systems to utilize other modalities such as tactile, voice, and text inputs. These systems supported more comprehensive interactions and facilitated a higher degree of personalization. Examples include ESPRIT's MASK (Lamel et al., 1998), Microsoft's MiPad (Huang et al., 2001), and AT&T's MATCH (Johnston et al., 2002).
Vision-driven tasks motivated research in adding visual understanding technology into conversational AI systems. Early work in reasoning over text+video include work by Ramanathan et al. (2014) where they leveraged these combined modalities to address the problem of assigning names of people in the cast to tracks in TV videos. Kong et al. (2014) leveraged natural language descrptions of RGB-D videos for 3D semantic parsing. Srivastava and Salakhutdinov (2014) developed a multimodal Deep Boltzmann Machine for image-text retrieval and ASR using videos. Antol et al. (2015) introduced a dataset and baselines for multimodal question-answering, a challenge combining computer vision and natural language pro-cessing. More recent work by Zhang et al. (2019b) and Selvaraju et al. (2019) leveraged conversational explanations to make vision and language models more grounded, resulting in improved visual question answering.
While modalities most commonly considered in the conversational AI literature are text, vision, tactile, and speech, other sources of information are gaining popularity within the research community. These include eye-gaze, 3D scans, emotion, action and dialogue history, and virtual reality. Heck et al. (2013) Processing conventional and new modalities brings forth numerous challenges for multimodal conversations. To answer these challenges, we will first mathematically formulate the multimodal conversational AI problem, then detail fundamental research sub-tasks required to solve it.
Mathematical Formulation
We formulate multimodal conversational AI as an optimization problem. The objective is to find the optimal response S to a message m given underlying multimodal context c. Based on the sufficiency of the context, the optimal response could be a statement of fact or a follow-up question to resolve ambiguities. Statistically, S is estimated as: The probability of an arbitrary response r can be expressed as a product of the probabilities of responses {r i } T i=1 over T turns of conversation (Sordoni et al., 2015).
It is also possible for conversational AI to respond through multiple modalities. We represent the multimodality of output responses by a matrix R := {r 1 i , r 2 i , . . . r l i } over l permissible output modalities.
Learning from multimodal data requires manipulating information from all modalities using a function f (·) consisting of five sub-tasks: representation, fusion, translation, alignment, and co-learning. We include these modifications and present the final multimodal conversational objective below.
In the following sections, we describe each subtask contained in f (·).
Multimodal Representation + Fusion
Multimodal representation learning and fusion are primary challenges in multimodal conversations. Multimodal representation is the encoding of multimodal data in a format amenable to computational processing. Multimodal fusion concerns joining features from multiple modalities to make predictions.
Multimodal Representations
Using multimodal information of varying granularity for conversations necessitates techniques to represent high-dimensional signals in a latent space. These latent multimodal representations encode human senses to improve a conversational AI's perception of the real-world. Success in multimodal tasks requires that representations satisfy three desiderata (Srivastava and Salakhutdinov, 2014): 1. Similarity in the representation space implies similarity of the corresponding concepts 2. The representation is easy to obtain in the absence of some modalities 3. It is possible to infer missing information from observed modalities There exist numerous representation methods for the range of problems multimodal conversational AI addresses. Multimodal representations are broadly classified as either joint representations or coordinated representations (Baltrušaitis et al., 2017).
Transformer-based models used as joint multimodal representations can be described as illustrated in the taxonomy of Figure 1. Modality specific encoders {j i (·)} n i=1 embed unimodal tokens {c i k } n k=1 to create latent features {z i k } n k=1 (Equation 5). Decoder networks use latent features to produce output symbols. A transformer Ψ(·) consists of stacked encoders and decoders with intramodality attention. Attention heads compute relationships within elements of a modality, producing multimodal representations {h i k } n k=1 (Equation 6).
Coordinated Representations
In contrast, coordinated representations model each modality separately. Constraints coordinate representations of separate modalities by enforcing cross-modal similarity over concepts. For example, the audio representation g a (·) of a dog's bark would be closer to the dog's image representation g i (·) and further away from a car's (Equation 7). A notion of distance d between modalities in the coordinated space enables cross-modal retrieval.
Multimodal Fusion
Multimodal fusion combines features from multiple modalities to make decisions, denoted by the final block before the outputs in Figure 1. Fusion approaches are broadly classified into model-agnostic and model-based methods.
Model-agnostic methods are independent of specific algorithms and are split into early, late, and hybrid fusion. Early fusion integrates features following extraction, projecting features into a shared space (Potamianos et al., 2003;Ngiam et al., 2011;Nicolaou et al., 2011;Jansen et al., 2019). In contrast, late fusion integrates decisions from unimodal predictors (Becker and Hinton, 1992;Korbar et al., 2018;Akbari et al., 2021). Early fusion is predominantly used to combine features extracted in joint representations while late fusion combines decisions made in coordinated representations. Hybrid fusion exploits both low and high level modality interactions (Wu et al., 2005;Schwartz et al., 2020;Piergiovanni et al., 2020;Goyal et al., 2020).
State-of-the-art Representation+Fusion Models for Conversational AI
Having introduced the multimodal representation and fusion challenges, we present the state-of-theart in these sub-tasks for conversational AI. textual modalities using LSTMs. Nodes in the factor graph represent attention distributions over elements of each modality, and factors capture relationships between nodes. There are two types of factors -local and joint. Local factors capture interactions between nodes of a single modality (e.g., words in the same sentence), while joint factors capture interactions between different modalities (e.g., a word in a sentence and an object in an image).
Factor Graph Attention
Representations from all modalities are concatenated via hybrid fusion and passed through a multilayer perceptron network to retrieve the best candidate answer. (Schwartz et al., 2020) presents TRANSRESNET for image-based dialogue. Image-based dialogue is the task of choosing the optimal response on a dialogue turn given an image, an agent personality, and dialogue history. TRANSRESNET consists of separately learned sub-networks to represent input modalities. Images are encoded using ResNeXt 32×48d trained on 3.5 billion Instagram images (Xie et al., 2017), personalities are embedded using a linear layer, and dialogue is encoded by a transformer pretrained on Reddit (Mazaré et al., 2018) to create a joint representation.
TRANSRESNET
TRANSRESNET compares model-agnostic and model-based fusion by using either concatenation or attention networks to combine representations. Like FGA, the chosen dialogue response is the candidate closest to the fused representation.
On the first turn, TRANSRESNET uses only style and image information to produce responses. Dialogue history serves as an additional modality on subsequent rounds. Ablation of one or more modalities diminishes the ability of the model to retrieve the correct response. Optimal performance on Image-Chat is achieved using multimodal concatenation of jointly represented modalities (Table 2).
MultiModal Versatile Networks (MMV)
Alayrac et al. (2020) presents a training strategy to learn coordinated representations using selfsupervised contrastive learning from instructional videos. Videos are encoded using TSM with a ResNet50 backbone (Lin et al., 2019), audio is encoded using log MEL spectrograms from ResNet50, and text is encoded using Google News pre-trained word2vec (Mikolov et al., 2013). Alayrac et al. (2020) defines three types of coordinated spaces: shared, disjoint, and 'fine+coarse'. The shared space enables direct comparison and navigation between modalities, by assuming equal granularity. The disjoint space sidesteps navigation to solve the granularity problem by creating a space for each pair of modalities. The 'fine+coarse' space solves both issues by learning two spaces. A fine-grained space compares audio and video, while a lower-dimensional coarse-grained space compares fine-grained embeddings with text. We further discuss the MMV model in Section 6.3.
Multimodal Translation
Multimodal translation maps embeddings from one modality to signals from another for cross-modality reasoning (Figure 1). Cross-modal reasoning enables multimodal conversational AI to hold meaningful conversations and resolve references across multiple senses, specifically language and vision. To this end, we survey existing work addressing the translation of images and videos to text. We discuss multimodal question-answering and multimodal dialogue, translation tasks that extend to multimodal conversations.
Image
Antol et al. (2015) and present Visual Question-Answering (VQA) and Visual7W for multimodal question answering (MQA). The MQA challenge requires responding to textual queries about an image. Both datasets collect questions and answers using crowd workers, encouraging trained models to learn natural responses. Heck and Heck (2022) presents the Visual Slot dataset, where trained models learn answers to questions grounded in UIs.
The objective of MQA is a simplification of Equation 4 to a single-turn, single-timestep scenario (T = 1), producing a response to a question m q given multimodal context {c i } n i=1 : , 2020). Besides visual reasoning, video-QA requires temporal reasoning, a challenge addressed by multimodal alignment that we discuss in the following section.
Multimodal Alignment
While image-based dialogue revolves around objects (e.g., cats and dogs), video-based dialogue revolves around objects and associated actions (e.g., jumping cats and barking dogs) where spatial and temporal features serve as building blocks for conversations. Extracting these spatiotemporal features requires multimodal alignment -aligning sub-components of different modalities to find correspondences. We identify action recognition and action from modalities as alignment challenges relevant to multimodal conversations.
Action Recognition
Action recognition is the task of extracting natural language descriptions from videos. UCF101 (Soomro et al., 2012), HMDB51 (Kuehne et al., 2011), andKinetics-700 (Carreira et al., 2019) involve extracting actions from short YouTube and Hollywood movie clips. HowTo100M (Miech et al., 2019), MSR-VTT , and YouCook2 (Zhou et al., 2017) are datasets containing instructional videos on the internet and require learning text-video embeddings. YouCook2 and MSR-VTT are annotated by hand while HowTo100M uses existing video subtitles or ASR.
Mathematically, the goal is to retrieve the correct natural language description y ∈ Y to a query video x (Equation 11). Video and text representation functions g(·) video and g(·) text embed modalities into a coordinated space where they are com-pared using a distance measure d.
Action from Modalities
Equipping multimodal conversational agents with the ability to perform actions from multiple modalities provides them with an understanding of the real world, improving their conversational utility. Talk the Walk (de Vries et al., 2018) presents the task of navigation conditioned on partial information. A "tourist" provides descriptions of a photo-realistic environment to a "guide" who determines actions. Vision-and-Dialog Navigation (Thomason et al., 2019) contains natural dialogues grounded in a simulated environment. The task is to predict a sequence of actions to a goal state given the world scene, dialogue, and previous actions. TEACh (Padmakumar et al., 2021) extends Visionand-Dialog Navigation to complete tasks in an AI2-THOR simulation. The challenge involves aligning information from language, video, as well as action and dialogue history to solve daily tasks. Ego4D (Grauman et al., 2021) contains text annotated egocentric (first person) videos in real-world scenarios. Ego4D includes 3D scans, multiple camera views, and eye gaze, presenting new representation, fusion, translation, and alignment challenges. It is associated with five benchmarks: Video QA, object state tracking, audio-visual diarization, social cue detection, and camera trajectory forecasting.
Multimodal Versatile Networks (MMV)
In addition to a representation, Alayrac et al. (2020) presents a self-supervised task to train modality embedding graphs for multimodal alignment. (Miech et al., 2020) variant of NCE measures loss on pairs of modalities of different granularity. MIL accounts for misalignment between audio/video and text by measuring the loss of fine-grained information with multiple temporally close narrations.
The network is trained on HowTo100M (Miech et al., 2019) and AudioSet (Gemmeke et al., 2017). Table 3 compares the performance of MMV on action classification, audio classification, and zeroshot text-to-video retrieval.
Discussion
The current datasets used for research in multimodal conversational AI are summarized in Table 4. While MQA and MTQA are promising starting points for multimodal natural language tasks, extending QA to conversations is not straightforward. Inherently, MQA limits itself to direct questions targeting visible content, whereas multimodal conversations require understanding information that is often implied (Mostafazadeh et al., 2017). Utterances in dialogue represent speech acts and are classified as constatives, directives, commissives, or acknowledgments (Bach and Harnish, 1979). Answers belong to a single speech act (constatives) and represent a subset of natural conversations.
Similarly, the work to-date on action recognition is incomplete and insufficient for conversational systems. Conversational AI must represent and understand spatiotemporal interactions. However, current research in action recognition attempts to learn relationships between videos and their natural language descriptions. These descriptions are not speech acts themselves. Therefore, they do not adequately represent dialogue but rather only serve as anchor points in the interaction.
In contrast, Image-Chat (Shuster et al., 2020) presents a learning challenge directly aligned with the multimodal dialogue objective in Equation 4. Image-Chat treats dialogue as an open-ended discussion grounded in the visual modality. Succeeding in the task requires jointly optimizing visual and conversational performance. The use of crowd workers that adopt personalities during data collection encourages natural dialogue and captures conversational intricacies and implicatures.
In addition, algorithmic improvements are required to advance the field of multimodal conversational AI -particularly with respect to the objective function. Current approaches such as MQA and action recognition models optimize a limited objective compared to Equation 4. We postulate that the degradation of these methods when applied to multimodal conversations is largely caused by this and, therefore, motivates investigation.
Another open research problem is to improve performance on Image-Chat. The current state-ofthe-art TRANSRESNET RET is limited. The model often hallucinates, referring to content missing in the image and previous dialogue turns. The model also struggles when answering questions and holding extended conversations. We suspect these prob-lems are a reflection of the limiting assumptions Image-Chat makes and the absence of multimodal co-learning to extract relationships between modalities. For further details, we refer readers to example conversations in Appendix A.
Different modalities often contain complementary information when grounded in the same concept. Multimodal co-learning exploits this crossmodality synergy to model resource-poor modalities using resource-rich modalities. An example of co-learning in context of Figure 1 is the use of visual information and audio to generate contextualized text representations. Blum and Mitchell (1998) introduced an early approach to multimodal co-training, using information from hyperlinked pages for web-page classification. Socher and Fei-Fei (2010) and Duan et al. (2014) presented weakly-supervised techniques to tag images given information from other modalities. Kiela et al. (2015) grounded natural language descriptions in olfactory data. More recently, Upadhyay et al. (2018) jointly trains bilingual models to accelerate spoken language understanding in low resource languages. Selvaraju et al. (2019) uses human attention maps to teach QA agents "where to look". Despite the rich history of work in multimodal co-learning, extending these techniques to develop multimodal conversational AI that understands and leverages cross-modal relationships is still an open challenge.
Conclusions
We define multimodal conversational AI and outline the objective function required for its realization. Solving this objective requires multimodal representation and fusion, translation, and alignment. We survey existing datasets and state-ofthe-art methods for each sub-task. We identify simplifying assumptions made by existing research preventing the realization of multimodal conversational AI. Finally, we outline the collection of a suitable dataset and an approach that utilizes multimodal co-learning as future steps. | 4,663.4 | 2022-05-13T00:00:00.000 | [
"Computer Science",
"Linguistics"
] |
Stanene-hexagonal boron nitride heterobilayer: Structure and characterization of electronic property
The structural and electronic properties of stanene/hexagonal boron nitride (Sn/h-BN) heterobilayer with different stacking patterns are studied using first principle calculations within the framework of density functional theory. The electronic band structure of different stacking patterns shows a direct band gap of ~30 meV at Dirac point and at the Fermi energy level with a Fermi velocity of ~0.53 × 106 ms−1. Linear Dirac dispersion relation is nearly preserved and the calculated small effective mass in the order of 0.05mo suggests high carrier mobility. Density of states and space charge distribution of the considered heterobilayer structure near the conduction and the valence bands show unsaturated π orbitals of stanene. This indicates that electronic carriers are expected to transport only through the stanene layer, thereby leaving the h-BN layer to be a good choice as a substrate for the heterostructure. We have also explored the modulation of the obtained band gap by changing the interlayer spacing between h-BN and Sn layer and by applying tensile biaxial strain to the heterostructure. A small increase in the band gap is observed with the increasing percentage of strain. Our results suggest that, Sn/h-BN heterostructure can be a potential candidate for Sn-based nanoelectronics and spintronic applications.
are investigated in terms of their stability, electronic property such as band structure, density of states, real space charge density and effective mass as well as the modulation of band gap under tensile strain and varying the interlayer distance. Our proposed structures with a finite band gap and high carrier mobility would provide further insight and encouragement on the modeling of stanene-based nanoelectronic and spintronic devices.
Methods
The electronic properties of the proposed heterobilayer structure are investigated using the density functional theory (DFT) with a plane-wave basis set using the Ab initio code PWSCF package of Quantum Espresso 19 . The electron-ion interactions are accounted using norm conserving Troullier Martin pseudo-potentials 20 . To describe the electron exchange correlation energy, the generalized gradient approximation (GGA) 21 with Perdew-Burke-Ernzerhof (PBE) exchange correlation functional is implemented. In order to consider the Van der Waals (vdW) inter-molecular attractive forces, we have used DFT-D2 22 , an empirical dispersion correction proposed by Grimme 23 throughout the simulations. For the structural optimization, the plane wave basis cut off is set at 550 eV (40 Ry) with a convergence threshold on force of 10 −4 Ry/a.u. The first Brillouine zone of the unit cell is sampled with a 12 × 12 × 1 Monkhorst-Pack grid for the geometry optimization and a 15 × 15 × 1 grid for the subsequent calculations. The interaction between the two adjacent bilayers is eliminated by introducing a sufficient vacuum of 20 Å along the direction perpendicular to the surface.
The effective masses of electrons * m ( ) e and holes * m ( ) h have been calculated from the curvature of the conduction band minimum and the valence band maximum respectively at the Dirac point for the three (3) configurations of stanene/h-BN heterobilayer using the following formula: where, m* is the particle effective mass, E(k) is the dispersion relation, k is the wave vector and ћ is the reduced Planck constant.
Results and Discussions
In our study, the geometry optimized lattice constant, Sn-Sn bond length and buckling height of the stanene are 4.687 Å, 2.834 Å and 0.82 Å respectively, which is in good agreement with the reported experimental 8,24 and simulation 5,13,14,[25][26][27] studies. This is also in close agreement with the reported theoretical studies by Liu et al. 11 , Nissimagoudar et al. 28 as well as by Peng et al. 29 . Again, in our study after the geometry optimization the lattice constant of h-BN is 2.504 Å, which is also in close agreement with the previously reported studies [30][31][32][33] in the literature. The optimized lattice constant of stanene is ~47% greater than that of h-BN and therefore we have considered stanene/h-BN heterostructures composed of 4 × 4 lateral periodicity of h-BN and 2 × 2 lateral periodicity of stanene, as shown in Fig. 1 (Fig. 1(a),(b) and (c)). This supercell would experience a lattice mismatch of ~7% and therefore, stanene monolayer is stretched by~7% to ensure the commensurability of the heterostructure supercell. The basic unit cell consists of two Sn atoms, four B atoms and four N atoms. Our Sn/h-BN heterobilayer configuration can be compared to the reported literature on the graphene/MoS 2 heterobilayer structures 34,35 . A 4 × 4 supercell of MoS 2 with a 5 × 5 supercell of graphene was considered due to the 23% difference in the lattice constants between graphene (2.42 Å) and MoS 2 (3.12 Å) and the graphene layer was further stretched by ~3% to meet the commensurability criterion. Also, Chen et al. 14 investigated the electronic and optical property of graphene/stanene heterobilayers where the lattice constants of graphene and stanene differ by about 46%. Hence, they 14 proposed a supercell composed of 4 × 4 graphene and 2 × 2 stanene and further stretched the stanene layer by 4.7% to form commensurable structures.
Three stacking patterns of stanene-hexagonal boron nitride (Sn/h-BN) have been considered in this study as shown in Fig. 1(a),(b) and (c), respectively. Pattern I exhibits the heterobilayer structure with alternating Sn atoms either right over B atoms or on the top of the centers of BN hexagons. In case of pattern II, alternating Sn atoms are either over the N atoms or on the top of the h-BN centers. In case of pattern III, each of the Sn atoms lies either over the N atom or over the B atom. Similar stacking patterns are reported in the literature for the study of the heterobilayer structures such as graphene/h-BN 15,36 , graphene on copper substrate 36 , germanene/BeO 37 and graphene/stanene 14 . Figure 2 represents the binding energies per unit cell with the variation of the interlayer distance for the three considered configurations. Obtained equilibrium distances are 3.75 Å, 3.78 Å and 3.7 Å for structure I, II and III, respectively. For the benchmarking of our calculation, we have computed the optimized interlayer distance of the graphene/h-BN heterobilyer. Our computed equilibrium interlayer distance using DFT for the graphene/h-BN heterobilayer is 2.53 Å, which is in close proximity with the reported value (2.58 Å) by Ukpong et al. 32 for graphene/h-BN heterobilayer. Our obtained equilibrium distances are also comparable to the reported 38 interlayer distances (3.57 Å to 3.75 Å) for bilayer h-BN using GGA in the scheme of PBE. It is also comparable to the experimentally obtained interlayer distance of the Sn bilayers (3.5 Å ± 0.2 Å) during the epitaxial growth of stanene 8 as well as with the calculated interlayer distance of the Sn bilayers (3.6 Å) by Evazzade et al. 39 using GGA and PBE exchange correlation in the Quantum Espresso package where DFT-D modeling has been employed. On the other hand, vdW-DF2 has been reported to yield an equilibrium interlayer distance higher than the experimentally determined interlayer distance for some heterobilayers (3.77 Å for graphene-SiC 32 ). In fact, the equilibrium interlayer distance is found to be sensitive with the choice of the exchange-correlation functionals 32,40 .
Moreover, as suggested by the binding energies of the three configurations shown in Fig. 2, all the considered configurations of the stanene/h-BN heterobilayers are electronically stable while structure I having the lowest binding energy. This can be attributed to the position of the B cations right under the Sn atoms of the stanene layer where π electron density is high, thereby enhancing the cation-π attractive interaction. On the contrary, in structure II, the N anions lie directly under the Sn atoms where the anion-π repulsive interaction occurs, thereby leading to a higher binding energy 15 .
Furthermore, the calculated binding energies in our study for the Sn/h-BN heterobilayers per unit cell at equilibrium are more than 250 meV which is higher 14,41 than the binding energies of the weak vdW interactions 42 . This indicates that the stanene monolayer is bound to the h-BN substrate via interactions somewhat stronger than the weak vdW interactions 43 . The heterobilayers such as graphene/h-BN 15,36 , stanene/MoS 2 44 and graphene/SiO 2 42 are reported to have weak vdW interactions between the heterobilayers with a binding energy of less than 100 meV per unit cell. On the other hand, in the graphene/HfO 2 heterobilayer structures, the graphene is strongly bound to the HfO 2 with a binding energy of 110 meV 43 . The similar phenomenon has been reported in the study of the graphene/stanene heterobilayers 14 with a binding energy of more than 180 meV per unit cell at optimum spacing. As an aside, the optimized interlayer distances for the three stacking configurations of the Sn/h-BN heterobilayers considered in this study are 3.7 Å to 3.78 Å, which are larger than the typical bond length of the Sn-B bonds (2.24 Å) 13 and the bond length of the Sn-N bonds (2.14 Å) 13 . This indicates that, the Sn-B and the Sn-N covalent bonds are absent in the Sn/h-BN heterostructures and further conforms to the similar phenomena reported regarding the graphene/stanene 14 and the graphene/HfO 2 43 heterobilayers. For further benchmarking, we have computed the band structure of the 2D monolayer stanene without SOC as well as in the presence of SOC as shown in Fig. 3(a) and (b) respectively. As can be seen from Fig. 3(a), 2D stanene is a zero band gap material in the absence of SOC. When SOC is considered, a direct band gap of 69 meV ( Fig. 3(b)) is opened at the K point. This is in excellent agreement with the DFT simulation study of 2D monolayer stanene using GGA in the scheme of PBE by Modarresi et al. 45 (70 meV direct band gap in the presence of SOC) as well as by Xiong et al. 46 (73 meV direct band gap with SOC), thereby confirming the validity of our simulations. Furthermore, in the presence of SOC a Fermi-level band crossing phenomenon of bands is observed at gamma (Γ) point. For the band structure of stanene as well as stanene nanoribbons in the presence of SOC, at gamma point similar phenomenon of either bands touching or crossing the Fermi level has been reported by Broek et al. 5 , Modarresi et al. 45 , Lu et al. 47 , as well as by Nagarajan et al. 48 . Despite this phenomenon, in view of the opening of the band gap at K point Broek et al. 5 concluded that, pristine stanene offers perspectives of an all-Sn field effect transistor (FET). However, as discussed by Xiong et al. 46 and Nagarajan et al. 48 , the effect of SOC on the band structure of stanene and the Fermi level touching or crossing phenomenon indicates its possibility of applications in spinelectronics and quantum spin Hall fields. Hence, the overall effect of SOC in the band structure of stanene and stanene based nanostructures requires consideration in terms of their promising application in nanoelectronics, spinelectronics, quantum spin Hall insulators as well as field effect transistor. Figure 4 shows the band structure of the stanene/h-BN heterobilayer without considering the effect of SOC for the three optimized configurations. The Fermi energy level is set to 0. The band structure patterns are almost similar for all the configurations and an opening of direct band gap of 25 meV for structure I, 33 meV for structure II and 30 meV for structure III can be observed at the K point. The magnified band structures around K point near the Fermi levels are also shown at the inset. As shown in the figure, linear Dirac dispersion relation is well preserved in all the three structures. Without SOC, pristine stanene shares the similar Dirac feature and no band gap is observed while the studied stanene/h-BN heterobilayer shows a direct band gap that indicates its possible application in nanoelectronic devices such as FETs.
With the inclusion of SOC, the observed band gap for the structure I, II and III are 74 meV, 109 meV and 90 meV, respectively as shown in Fig. 5. The estimated band gaps including the effect of SOC are higher in magnitude with no significant change in the band structures near the K point as can be observed from the magnified plots in Fig. 5. This can be attributed to the effect of SOC on the heavy Sn atoms. In fact, group IV materials such as silicon (Si), tin (Sn) and lead (Pb) with single layer hexagonal configurations show SOC-induced band gaps and this effect gets stronger with the increasing atomic number of heavier atoms 49 . However, at gamma point (Γ) a Fermi-level band crossing phenomenon of bands can be observed which is similar to pristine stanene with the effect of SOC indicating the possibility of applying Sn/h-BN heterobilayers in the application of spinelectronics and quantum spin Hall insulators 46,48 .
The linear Dirac disperison near K point suggests a high charge carrier mobility. To further analyze this fact, we have calculated the effective mass of electrons * m ( ) e and holes * m ( ) h at the Dirac point for the three (3) configurations of stanene/h-BN heterobilayer. For structure III, the calculated effective mass for the electron ⁎ m ( ) e and the hole ⁎ m ( ) h are 0.0507 m 0 and 0.0543 m o respectively, which are small enough to provide high carrier mobility. The computed electron effective masses for the other two configurations are 0.054 m 0 (structure I) and 0.0513 m 0 (structure II). On the other hand, the calculated Fermi velocities are 0.538 × 10 6 ms −1 , 0.526 × 10 6 ms −1 and 0.538 × 10 6 ms −1 for structure I, II and III, respectively which are in the similar order of magnitude to that of stanene 5 (0.97 × 10 6 ms −1 ) and graphene 50 (10 6 ms −1 ).
Due to the interaction between the stanene and the h-BN layer, the sublattice symmetry of the zero band gap hexagonal stanene structure is broken, therefore opening a direct band gap. Similar phenomenon has been reported by Xiong et al. 51 for the stanene/MoS 2 heterostructure where the strong interaction breaks the sublattice symmetry of stanene and opens a band gap. Similarly, the strong interaction with the BeO monolayer substrate introduces a finite band gap in the germanene/BeO heterostructure by breaking the symmetry of the germanene 37 . Also, due to the symmetry breaking of the two carbon atoms in the hexagonal graphene induced by the interaction with the h-BN monolayer, a finite band gap of ~50 meV opens as reported in the literature 15,36 . Next, to analyze the electronic properties and gain deeper insight on the interlayer interactions, we have plotted the total as well as the projected density of states (PDOS) for structure III of the stanene/h-BN heterobilayer as shown in Fig. 6. As all the considered configurations of the Sn/h-BN heterobilayer are found to show similar band structure along with nearly same energy band gap, effective mass as well as Fermi velocity, hence we show the results for structure III only as a representative of Sn/h-BN heterobilayer. As depicted in Fig. 6(a), the peaks of the PDOS in the conduction band (0 to 2 eV) as well as in the valence band (−2 to 0 eV) are dominantly contributed by stanene. Again, molecular orbital resolved density of states presented in Fig. 6(b) reveals the dominant role of the π orbital of stanene in the conduction and valence bands, which is similar to the characteristics of the isolated monolayer stanene 5 . Furthermore, as shown in Fig. 6, unlike hybrid systems with high chemical interactions 52 , h-BN orbitals do not hybridize with stanene orbitals near the Fermi level ensuring non-intensive interactions of the two layers. This is further supported by the real space charge density distribution of the conduction and valence bands for structure III as shown in Fig. 7. The localization of stanene in the conduction as well as in the valence band of the heterobilayer structure further confirms the dominant role of stanene in shaping the electronic properties of the stanene/h-BN heterostructure. Therefore, electronic carriers are expected to transport only through the stanene layer, thereby leaving the h-BN layer to be a good choice as a substrate for the heterostructure. The localization of stanene in the conduction band and the valence band of the Sn/h-BN heterobilayer structure in this study can be compared to the heterostructures formed by other group-IV mono atomic 2D compounds such as graphene and silicene with h-BN working as a substrate. As reported by Balu et al. 18 , the band structure for the graphene/BN bilayer is dominated by the bands associated with the carbon atoms near the Fermi level, with a band gap of ~100 meV at the Dirac point. Similarly, silicene is reported 53 to dominate the electronic transport in the silicene/h-BN heterobilayers and the Dirac cone of the silicene layer is preserved after contacting the BN layer, which is in accordance with our study for the stanene/h-BN heterobilayers. In fact, the heterostructures consisting of single layer silicene and graphene sandwiched between h-BN bilayers also show a band gap opening and high carrier mobility while the silicene and graphene layer remain nearly unaffected by h-BN 54,55 . In addition, the periodic heterostructures with a graphene sheet sandwiched between four h-BN layers also show similar phenomenon 56 .
In this section, we focus our study to tune up the band gap of the Sn/h-BN heterostructure for its possible use in the high performance nanoelectronic and spintronic devices.
Firstly, we have calculated the energy band gaps with the variation of the interlayer spacing of the stanene and h-BN layers for all three considered configurations as presented in Fig. 8. With the increasing interlayer distance, energy band gap decreases. As the interlayer distance increases, the interaction between the h-BN and stanene layer decreases. As a result, the stanene layer tends to get back to its original symmetry 57,58 therefore reducing the band gap. This is in line with the study of graphene/h-BN heterobilayers 15 of various stacking patterns where the band gaps increase with the decrease in interlayer spacing. Figure 9 shows the variation in the band gap for structure III under external strain of varying percentage. The strain ε ( ) percentage is defined as the following: where, a is the unit cell parameter. For the applied biaxial strain, the structural symmetry remains almost same as the unstrained case thereby causing a little increase in the band gap as shown in Fig. 9(a). The small changes in the effective mass and the Fermi velocities are also presented in Fig. 9(a). With the increasing percentage of strain, the Fermi level shifts downward and a self hole doping characteristics with hole carriers appear, which is evident in Fig. 9(b). In fact, at a strain of 4%, the Fermi level touches the valence band and further shifts downward at a maximum strain of 7%. This is also in line with the study for the stanene monolayer 5 . On the other hand, the reverse phenomenon i.e. the upward shift of Fermi level is expected for the compressive strain. Finally, we have investigated one non-high symmetry configuration for the Sn/h-BN heterostructure as shown in Fig. 10(a). For the considered non-high symmetry supercell structure, the supercell is made up of √7 × √7 stanene and 5 × 5 h-BN and the stanene layer is rotated by .°21 5 with respect to the h-BN layer which ensures the commensurability condition in the supercell. The electronic band structure for the non-high symmetry configuration is shown in Fig. 10 (b). A direct band gap of ~20 meV can be observed at M point which is comparable to the obtained band gap of ~30 meV for the high symmetry configurations of the Sn/h-BN heterobilayer in this study. Hence, it is reasonable to conclude that, non-high symmetry configurations will not substantially reduce the band gap of the Sn/h-BN heterobilayers. This is in accordance with the findings for graphene/h-BN heterobilayers as reported by Giovannetti et al. 36 .
Conclusion
In summary, we have investigated the structural and electronic properties of the Sn/h-BN heterobilayers from DFT studies considering van der Waals interaction between two layers. We have considered three different stacking patterns and all of these are found to be electrically stable in terms of the high binding energy. The band structures of the Sn/h-BN heterobilayer shows a direct band gap of about 30 meV at the Fermi energy level while linear Dirac dispersion relation is closely maintained. The real space charge distribution and density of states of the heterostructure suggest the localization of stanene π orbital in the conduction as well as in the valence bands, further confirming the superior role of stanene in shaping the electronic properties of the stanene/h-BN heterostructure. Moreover, our calculated small effective mass and the emergence of Dirac cone suggest high charge carrier mobility for the Sn/h-BN heterostructure. Furthermore, our analysis concludes that, the band gap can be effectively tuned by varying the interlayer spacing between the Sn and h-BN monolayer while preserving the stability. A small increase in the band gap is observed with the increasing percentage of tensile biaxial strain. At a higher percentage of strain, the Fermi level shifts downward and self hole doping characteristics with hole carriers appear. Such unique and tunable electronic properties of the Sn/h-BN heterobilayer would further encourage the Sn-based nanoelectronic and spintronic device applications. | 5,041.4 | 2017-11-27T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Standardization of Sequencing Coverage Depth in NGS: Recommendation for Detection of Clonal and Subclonal Mutations in Cancer Diagnostics
The insufficient standardization of diagnostic next-generation sequencing (NGS) still limits its implementation in clinical practice, with the correct detection of mutations at low variant allele frequencies (VAF) facing particular challenges. We address here the standardization of sequencing coverage depth in order to minimize the probability of false positive and false negative results, the latter being underestimated in clinical NGS. There is currently no consensus on the minimum coverage depth, and so each laboratory has to set its own parameters. To assist laboratories with the determination of the minimum coverage parameters, we provide here a user-friendly coverage calculator. Using the sequencing error only, we recommend a minimum depth of coverage of 1,650 together with a threshold of at least 30 mutated reads for a targeted NGS mutation analysis of ≥3% VAF, based on the binomial probability distribution. Moreover, our calculator also allows adding assay-specific errors occurring during DNA processing and library preparation, thus calculating with an overall error of a specific NGS assay. The estimation of correct coverage depth is recommended as a starting point when assessing thresholds of NGS assay. Our study also points to the need for guidance regarding the minimum technical requirements, which based on our experience should include the limit of detection (LOD), overall NGS assay error, input, source and quality of DNA, coverage depth, number of variant supporting reads, and total number of target reads covering variant region. Further studies are needed to define the minimum technical requirements and its reporting in diagnostic NGS.
The insufficient standardization of diagnostic next-generation sequencing (NGS) still limits its implementation in clinical practice, with the correct detection of mutations at low variant allele frequencies (VAF) facing particular challenges. We address here the standardization of sequencing coverage depth in order to minimize the probability of false positive and false negative results, the latter being underestimated in clinical NGS. There is currently no consensus on the minimum coverage depth, and so each laboratory has to set its own parameters. To assist laboratories with the determination of the minimum coverage parameters, we provide here a user-friendly coverage calculator. Using the sequencing error only, we recommend a minimum depth of coverage of 1,650 together with a threshold of at least 30 mutated reads for a targeted NGS mutation analysis of ≥3% VAF, based on the binomial probability distribution. Moreover, our calculator also allows adding assay-specific errors occurring during DNA processing and library preparation, thus calculating with an overall error of a specific NGS assay. The estimation of correct coverage depth is recommended as a starting point when assessing thresholds of NGS assay. Our study also points to the need for guidance regarding the minimum technical requirements, which based on our experience should include the limit of detection (LOD), overall NGS assay error, input, source and quality of DNA, coverage depth, number of variant supporting reads, and total number of target reads covering variant region. Further studies are needed to define the minimum technical requirements and its reporting in diagnostic NGS.
INTRODUCTION
Next-generation sequencing (NGS) has rapidly expanded into the clinical setting in haemato-oncology and oncology, as it may bring great benefits for diagnosis, selection of treatment, and/or prognostication for many patients (1). Recently, several articles about the validation of deep targeted NGS in clinical oncology were published (2,3), including a comprehensive recommendation by the Association for Molecular Pathology and the College of American Pathologists (1). However, the lack of standardization of targeted NGS methods still limits their implementation in clinical practice (4).
One challenge in particular is the correct detection of mutations present at low variant allele frequencies (VAF) and standardization of sequencing coverage depth (1,5,6). This is especially important for mutations that have clinical impacts at subclonal frequencies (1) such as the case of TP53 gene mutations (TP53mut) in chronic lymphocytic leukemia (CLL) (7,8). TP53 aberrations (TP53mut and/or chromosome 17p deletion) are among the strongest prognostic and predictive markers guiding treatment decisions in CLL (9). Nowadays, the European Research Initiative on Chronic Lymphocytic Leukemia (ERIC) recommends detecting TP53mut with a limit of detection (LOD) of at least 10% VAF (10), and a growing body of evidence exists dedicated to the clinical impact of small TP53 mutated subclones in CLL (7,8).
Sanger sequencing and deep targeted NGS are currently the techniques most used for TP53mut analysis (10) as well as for analysis of other genes with clinical impacts at low allele frequencies. Although Sanger sequencing provides a relatively accessible sequencing approach, it lacks the sensitivity needed to detect subclones due to its detection limit of 10-20% of mutated alleles (10). NGS-based analysis has thus gained prominence in diagnostic laboratories for the detection of somatic variants and various technical developments of error correction strategies, both computational and experimental, are being developed for the accurate identification of low-level genetic variations (11). We therefore address the importance of the correct determination of sequencing depth in diagnostic NGS in order to obtain a confident and reproducible detection, not only of low VAF variants. Finally, we performed a dilution experiment to confirm our theoretical calculations, and we close by discussing our experience with diagnostic detection of TP53mut in CLL patients and further perspectives about NGS standardization in cancer diagnostics.
NGS SEQUENCING DEPTH AND ERROR RATE
NGS sequencing depth directly affects the reproducibility of variant detection: the higher the number of aligned sequence reads, the higher the confidence to the base call at a particular position, regardless of whether the base call is the same as the reference base or is mutated (1). In other words, individual sequencing error reads are statistically irrelevant when they are outnumbered by correct reads. Thus, the desired coverage depth should be determined based on the intended LOD, the tolerance for false positive or false negative results, and the error rate of sequencing (1,11).
Using a binomial distribution, the probability of false positive and false negative results for a given error rate as well as the intended LOD can be calculated, and the threshold for a variant calling for a given depth can be estimated (1). For example, given a sequencing error rate of 1%, a mutant allele burden of 10%, and a depth of coverage 250 reads, the probability of detecting 9 or fewer mutated reads is, according to the binomial distribution, 0.01%. Hence, the probability of detecting 10 or more mutated reads is 99.99% (100-0.01%), and the threshold for a variant calling can be defined. In other words, a coverage depth of 250 with a threshold of at least 10 mutated reads will have a 99.99% probability that 10% of the mutant allele load will not be missed by the variant calling (although it can be detected in a different proportion). In this way, the risk of a false negative result is greatly minimized. On the other hand, the probability of false positives heavily depends on the sequencing error rate (as the accuracy of all analytical measurements depends on the signal-to-noise ratio) (1,11). In our example, the probability of a false positive result is 0.025%; however, the rate of false positives is not negligible when decreasing the LOD to the value close to the error rate. Conventional intrinsic NGS error rates range between 0.1 and 1% (Phred quality score of 20-30) (1, 11) depending on the sequencing platform, the GC content of the target regions (12), and the fragment length, as shown in Illumina paired-end sequencing (13). Therefore, the detection of variants at VAFs <2% is affected by a high risk of a false positive result, regardless of the coverage depth. It is also important to mention that the sequencing error rate applies only for errors produced by sequencing itself and does not include other errors introduced during DNA processing and library preparation, particularly during amplification steps, which further increase error rates (1,11).
MINIMUM SEQUENCING COVERAGE IN CLINICAL SETTINGS
There is currently no consensus on the minimum required coverage in a clinical setting using deep targeted resequencing by NGS, and so each laboratory has to set its own parameters in order to meet sufficient quality (1,5). To date, only a few studies have recommended the minimum coverage criteria for deep targeted NGS in clinical oncology: 500 depth of coverage and a LOD of 5% (2), 300-500 depth of coverage without defying the LOD (3), 250 depth and a LOD of 5% with threshold adjustment to 1,000 depth of coverage is required in cases of heterogeneous variants in low tumor cellularity samples (1), and 100 depth with at least 10 variant reads and a LOD of 10% (10). According to the binominal data distribution, a coverage depth of 250 should indeed be sufficient to detect 5% VAF with a threshold of variant supporting reads ≥5 (Figure 1). On the other hand, NGS analysis with a coverage depth of 100 along with a requirement of at least 10 variant supporting reads as recommended by the ERIC consortium (10) would result in a false negative of 45% for samples with a LOD of 10%. To confirm these theoretic calculations, we performed two independent dilution experiments to estimate the performance of TP53 NGS analysis to detect 10% VAF at a depth of coverage of 100 reads. Indeed, we detected 30% of false negatives (5 positive samples of 7 true-positive samples and 9 positive samples of 13 true-positive samples) in two independent sequencing runs. Unfortunately, the false negative rate is often underestimated in targeted resequencing. Also, a recent study investigating inter-laboratory results of somatic variant detection with VAFs between 15 and 50% in 111 laboratories with reported LODs of 5-15% (6) shows that major errors in diagnostic NGS may arise from false negative results, even in samples with high mutation loads (6). Of three concurrent false positive results, all variants were correctly detected but mischaracterised (6). Since laboratories have not been asked to report coverage depth for other regions than the identified variants (6), we may only assume that low coverage or high variant calling thresholds contributed to the false negative results. These results further highlight the need for standardized coverage depth parameters in diagnostic NGS, taking into account sequencing errors as well as assayspecific errors.
FREQUENCY OF TP53 SUBCLONAL MUTATIONS IN CLL DETECTED THROUGH DIAGNOSTIC NGS
In order to evaluate the occurrence of low VAF in real-world settings, we reviewed our cohort of CLL patients examined for TP53mut in our diagnostic laboratory. The TP53mut were assessed as reported previously (14,15). Briefly, TP53 (exons 2-10 including 2 bp intronic overlap, 5 ′ and 3 ′ UTR) was analyzed using 100 ng gDNA per reaction. Amplicon-based libraries were sequenced as paired-end on MiSeq (2x151, Illumina) with minimum target read depths of 5,000x. The LOD of TP53mut was set up to 1%, and the variants in the range 1-3% were confirmed by replication. Written informed consent was obtained from all the patients who were enrolled in accordance with the Helsinki Declaration, and the study was approved by the local ethical committee.
Of the diagnostic cohort of 859 CLL patients (April 2016-April 2019), 25% (215/859) were positive for TP53mut, and of those, 52.6% (113/215) carried variants with VAF at 10% or lower. In line with our observations, a recent study (8) reported the presence of 63 and 84% low burden (Sanger negative) TP53muts in CLL patients at the time of diagnosis and at the time of treatment, respectively, and confirmed the negative impact on the overall survival of TP53muts above 1% VAF at the time of treatment (8).
CALCULATOR FOR DIAGNOSTIC NGS SETTINGS FOR DETECTION OF SUBCLONAL MUTATIONS
To assist laboratories with the determination of the minimum proper coverage parameters, we are providing a simple, userfriendly theoretical calculator (software) based on the binomial distribution (Figure 2), described in the Supplementary File. A web (or desktop) application and stand-alone source codes in R are accessible on Github: https://github.com/mvasinek/olgencoverage-limit. Using this calculator, the correct parameters of sequencing depth and the corresponding minimum number of variant reads for a given sequencing error rate and intended LOD can easily be determined. Moreover, users can also take into account other errors by simply adding assay-specific errors to the sequencing error rate and using this overall error as an input to the calculator. For example, in our case of TP53 mutational analysis we calculated with the overall error of ∼1.16%, thus we set up our minimum coverage depth requirements to 2,000 with threshold of minimum 40 reads for 3% VAF.
DISCUSSION
Although diagnostic NGS has gained prominence in clinical settings for the assessment of somatic mutations in cancer, insufficient standardization of sequencing parameters still limits its implementation in clinical practice (1), mainly for variants present at low allele frequencies (4). We, therefore, addressed the technical question of correctly determining the sequencing depth in diagnostic NGS in order to obtain confident and reproducible detections of low VAF variants. In particular, we performed theoretical calculations to determine the optimum depth of coverage for the desired probability of detection of variants at low allele frequencies, taking into account the sequencing error rate. Moreover, we confirmed these theoretical calculations by conducting dilution experiments. Based on these observations, we recommend a depth of coverage of 1,650 or higher (together with the respective threshold of at least 30 mutated reads) to call ≥3% variants to achieve a 99.9% probability of variant detection, using the conventional NGS sequencing error only.
Variants in the 1-3% VAF range can only be called if the obtained sequence data is of high quality (average Q30 > 90%) and/or when the variants are confirmed by replication or the orthogonal method (1,11,16). We are also providing a simple, user-friendly theoretical calculator (software) to assist laboratories with resolving the correct sequencing depth and the corresponding minimum number of variant reads while taking into account the sequencing error rate. Our simple calculator may help to minimize the false positive and false negative results in diagnostic NGS.
Nevertheless, correct sequencing depth is also influenced by assay-specific factors (1). Errors can occur at many stages during DNA processing and library preparation. The most common are amplification errors introduced during NGS library preparation (1,12,17). Other common sources of errors have to do with library complexity (the number of independent DNA molecules analyzed), DNA quality, and target region complexity etc. All potential assay-specific errors should be addressed through test design, method validation, and quality control.
Currently, emerging error correction strategies, both computational and experimental, are being developed in order to mitigate the high error rates in diagnostic NGS (11). So far, among the most promising error correction methods are UMI (Unique Molecular Identifiers), which correct for PCR errors (18), and signal-to-noise correction approaches (11). These advances attempt to reduce the LOD, thereby increasing sequencing accuracy needed for future opportunities in NGS diagnosis.
In order to improve the standardization in diagnostic NGS, the estimation of correct coverage depth is a recommended starting point when assessing thresholds surrounding a particular NGS assay. Nevertheless, there is still lack of published guidance regarding the minimum technical requirements and its reporting in NGS, particularly important in detection of clonal and subclonal mutations in cancer diagnostics. This is mainly due to the broad range of library preparation approaches, and numerous variables playing a role in each specific NGS assay, that are difficult to standardize, together with inter-laboratory variability. Therefore, the definition of minimum technical requirements and its reporting in NGS is highly desirable. Based on our experience in diagnostic NGS in haemato-oncology, we suggest to report at least following technical parameters: LOD, overall error of NGS assay (or at least sequencing error rate), the amount of DNA input, source, and quality of DNA, minimum coverage depth and the percentage of targeted bases sequenced at this minimum depth, total number of target reads covering variant region and number of reads supporting the variant. Special emphasis should be given to NGS standardization of the formalin-fixed paraffin-embedded (FFPE) samples (19,20).
Taken together, our study highlights the importance of correct sequencing depth and the minimum number of reads required for reliable and reproducible detection of variants with low VAF in diagnostic NGS. The calculation of correct sequencing depth for a given error rate using our user-friendly theoretical calculator (software) may help to minimize the false positive and false negative results in diagnostic NGS, in situations related to subclonal mutations among others. The rigorous testing and standardized minimum requirements for diagnostic NGS is particularly desirable to ensure correct results in clinical settings.
DATA AVAILABILITY
The datasets generated for this study are available on reasonable request to the corresponding author.
AUTHOR CONTRIBUTIONS
AP and EK designed the study, interpreted the results, and wrote the manuscript. AP, LS, TD, and PS performed NGS analysis. TP collected the patient samples and clinical data. MV performed bioinformatics analysis and wrote the calculator code. TN prepared web application. All authors read and approved the final version of manuscript. | 3,925.6 | 2019-09-04T00:00:00.000 | [
"Biology"
] |
The galactic acceleration scale is imprinted on globular cluster systems of early-type galaxies of most masses and on red and blue globular cluster subpopulations
Context. Globular clusters carry information about the formation histories and gravitational fields of their host galaxies. Bílek et al. (2019, BSR19 hereafter) reported that the radial profiles of volume number density of GCs in GC systems (GCS) follow broken power laws, while the breaks occur approximately at the a 0 radii. These are the radii at which the gravitational fields of the galaxies equal the galactic acceleration scale a 0 = 1 . 2 × 10 − 10 ms − 2 known from the radial acceleration relation or the MOND theory of modified dynamics. Aims. Our main goals here are to explore whether the results of BSR19 hold true for galaxies of a wider mass range and for the red and blue GCs sub-populations. Methods. We exploited catalogs of photometric GC candidates in the Fornax galaxy cluster based on ground and space observations and a new catalog of spectroscopic GCs of NGC1399, the central galaxy of the cluster. For every galaxy, we obtained the parameters of the broken power law density by fitting the on-sky distribution of the GC candidates, while allowing for a constant density of contaminants. The logarithmic stellar masses of our galaxy sample span 8 . 0 − 11 . 4 M ⊙ . Results. All investigated GCSs with a su ffi cient number of members show broken power-law density profiles. This holds true for the total GC population and the blue and red subpopulations. The inner and outer slopes and the break radii agree well for the di ff erent GC populations. The break radii agree with the a 0 radii typically within a factor of two for all GC color subpopulations. The outer slopes correlate better with the a 0 radii than with the galactic stellar masses. The break radii of NGC1399 vary in azimuth, such that they are greater toward and against the direction to NGC1404, that tidally interacts with NGC1399.
Introduction
Globular clusters (GCs) are compact (a few parsecs), massive (10 4 −10 6 M ⊙ ) star systems found in nearly all galaxies.A galaxy similar to the Milky Way has a few hundreds of them, while giant ellipticals can have more than ten thousand GCs.The colors of many galaxies form a bimodal distribution, with rather universal positions for the two peaks.Therefore, GCs are divided into two types -the metal-poor "blue GCs" and the metal-rich "red GCs" (Brodie & Strader 2006;Cantiello et al. 2020).Red GCs generally follow the kinematics of the stars in a galaxy, with a similar rotational velocity and velocity dispersion.In contrast, blue GCs often show complex kinematics (Schuberth et al. 2010;Coccato et al. 2013;Chaturvedi et al. 2022).The spatial distribution of GCs around galaxies is more centrally concentrated for the red GCs than for the blue GCs.The distinct properties of the red and blue GCs point toward their different formation pathways (Ashman et al. 1995;Peng et al. 2006;Brodie & Strader 2006).It seems that the blue GCs are added to massive galaxies via ac-cretion of low-mass galaxies, while most red GCs form in situ, together with the stars of the host galaxy (Côté et al. 1998;Harris 2001;Tonini 2013;Renaud et al. 2017).Globular cluster systems thus carry information about the assembly history of their host galaxies (Peng et al. 2008;Brodie et al. 2014;Harris et al. 2016).The number of GCs that a galaxy hosts is proportional to the expected mass of its dark matter halo (Spitler & Forbes 2009;Harris et al. 2015).The kinematics and distribution of GC systems (GCSs) reflect the profiles of the gravitational fields of their host galaxies (Samurović 2014(Samurović , 2016;;Alabi et al. 2017;Bílek et al. 2019b).
In the paper by Bílek et al. (2019a) (BSR19 hereafter), an interesting new property of GCSs of early-type galaxies was noted.They parametrized 1 the volume number density of GCs in a GCS, ρ by a broken power law as ρ(r) = ρ 0 r a for r < r br , ρ(r) = ρ 0 r a−b br r b for r ≥ r br , where r is the galactocentric radius.The parameter r br was called the break radius.The authors found that the break radius coincides well with the a 0 radius, that is the radius at which the expected gravitational acceleration generated by the baryons of the galaxy equals the galactic acceleration scale a 0 = 1.2 × 10 −10 m s −2 .The values of the break radii did not agree with the values of the other characteristic lengths of the galaxies, such as stellar effective radii or dark halo scale radii.These other lengths were either several times bigger or smaller than the break radii, at least for a substantial fraction of the galaxy sample (see their Table 1).
The galactic acceleration scale is known best from the behavior of the observed gravitational fields of galaxies (e.g., Lelli et al. 2017;Li et al. 2017).In the regions of galaxies where the gravitational acceleration expected by Newtonian gravity from the distribution of baryons g N is greater than a 0 , the observed gravitation acceleration equals g N , meaning that the Newtonian dynamics does not require dark matter.On the other hand, in the regions where g N is lower than a 0 , the observed gravitational acceleration is very close to √ g N a 0 .The same rules apply even for many, or perhaps all, GCs (Scarpa & Falomo 2010;Scarpa et al. 2011;Ibata et al. 2011;Sanders 2012;Hernandez et al. 2012;Hernandez & Lara-D.I. 2020).This behavior was initially predicted by the modified Newtonian dynamics (MOND), which is a class of modified gravity and inertia theories (Milgrom 1983c).Here we assume MOND to be a modified gravity theory.It predicts that the gravitational acceleration in spherical isolated objects is (Milgrom 1983c(Milgrom , 2010;;Famaey & McGaugh 2012) g M = g N ν(g N /a 0 ). (2) The function ν is not known exactly, but it must have the limit behavior ν(x) ∼ x −1/2 for x ≪ 1, and ν(x) ∼ 1 for x ≫ 1.This gives rise to two regimes of a gravitational field around a galaxy: the strong field, the so-called Newtonian regime, and the weak field, the so-called deep-MOND regime.The observed counterpart of Eq. 2 is known as the radial acceleration relation (McGaugh et al. 2016).
Apart from the radial acceleration relation, MOND predicted or explained many other observational laws (Milgrom 1983c,a,b), all of which contain the constant a 0 .This is the case for the baryonic Tully-Fisher relation (McGaugh et al. 2000;Lelli et al. 2019), the Faber-Jackson relation (Faber & Jackson 1976;Famaey & McGaugh 2012), and the radial acceleration relation, which connect the mass or mass distribution of galaxies to the velocities of stars and gas in them.The Fish law (Fish 1964;Allen & Shu 1979) and Freeman limit (Freeman 1970;McGaugh et al. 1995;Fathi 2010;Famaey & McGaugh 2012) give upper limits on the surface brightness for elliptical and spiral galaxies, respectively, above which galaxies are rare.Recently, there appeared a MOND explanation (Milgrom 2021) of the Fall relation (Fall 1983;Posti et al. 2018), which connects the mass and specific angular momentum of galaxies.The law of the universal surface density of the cores of the putative dark matter halos (Kormendy & Freeman 2004;Donato et al. 2009;Salucci et al. 2012) can be explained by MOND as well (Milgrom 2009).
Finally, there are interesting numerical coincidences of a 0 with the constants of cosmology (Milgrom 1983c(Milgrom , 2020)).If we denote H 0 the Hubble constant, c the speed of light, G the gravitational constant, R H the size of the cosmic horizon, and M H the total mass inside the cosmic horizon, then we find the order-ofmagnitude equalities a 0 ≈ cH 0 ≈ c 2 Λ 1/2 ≈ c 2 /R H ≈ c 4 /GM H .There is no clear explanation of these coincidences yet (Navarro et al. 2017;Milgrom 2020).
The finding of BSR19, of the equality of the break and a 0 radii, is thus another case of the many occurrences of the constant a 0 in extragalactic astronomy.More precisely, in this work we consider two types of a 0 radii: i) the one where the acceleration calculated from the distribution of baryons and Newtonian gravity equals a 0 and ii) where the acceleration calculated for MOND gravity via Eq. 2 equals a 0 .For most galaxies, the two a 0 radii are numerically similar.Therefore, in this paper, if we do not specify whether we are referring to Newtonian or MOND a 0 , then we mean that the statement is valid for both options.
The theoretical explanation for why the a 0 radii coincide with the break radii of GCSs has not been clarified yet, even if some initial proposal were given in BSR19.Importantly, according to one of the proposed explanations that involves the Newtonian gravity and dark matter, the match of the a 0 and break radii is of practical importance.It had been found before that the number of GCs that a galaxy has is proportional to the mass of its dark matter halo (Spitler & Forbes 2009;Harris et al. 2015).The new finding allows one to estimate the scale radius of the halo: the break radius should be located at the radius where the gravitational attraction of the stars of the galaxy equals that of the dark matter halo.One can thus solve the equation of the equity of the accelerations to obtain the scale radius of the halo.
The paper BRS19 left several important questions open, which we aim to answer here.We investigate the distribution of GCs primarily using photometric data for early-type galaxies in the Fornax galaxy cluster, but we also analyze new spectroscopic data for two galaxies.The galaxy sample of BSR19 spanned only about one order of magnitude in stellar mass.The data investigated here allow us to verify that the match between the a 0 and break radii holds true for early-type galaxies spanning three orders of magnitude -from dwarfs of the mass of the Magellanic clouds to the brightest cluster galaxies.The distribution of GCs was investigated in BSR19 on the basis of the catalog of spectroscopically confirmed GCs.This approach can lead to distorted results, because spectroscopic surveys usually are spatially incomplete.This could have affected the derived parameters of the broken power-law profile.The verification of the match of the a 0 and break radii in the new data thus removes the shade of doubt that was left about the results of the previous work.The data in BSR19 do not allow the density profiles of the red and blue GC subpopulations to be investigated.We do this here and find that there are no statistically significant trends of the profile parameters with the color of the GCs.We exploit the new data for a further exploration of the profiles of GCSs.In particular, we investigate whether the parameters of the broken power-law profiles correlate with each other and with the parameters of the host galaxy.We also investigate in detail the GCS of NGC 1399, that is the central galaxy of the Fornax cluster.We find that its break radius depends on the relative velocity of the GCs with respect to the center of the galaxy, and that the break radius varies as a function of the position angle.In this paper, we also aim to explain the reason of why the GCSs of our galaxies have the broken power-law density profiles and why the break radii coincide with the a 0 radii.Several explanations were proposed in BSR19, and here we add a few more.Then we make the first steps toward finding which of them is correct.None seem perfect at this point.This paper is organized as follows.In Sect. 2 we describe the observational data that we analyze here.The methods to extract and fit the radial profiles of volume number densities of GCs in the GCSs of the investigated galaxies are detailed in Sect.3. The derivation of the a 0 radii for our sample galaxies is explained in Sect. 4. We present our results in Sect. 5.In particular, in Sect.5.1 we compare the structural parameters of GCSs for the total GC population and the red and blue subpopulations.In Sect.5.2 we investigate the correlations of the structural parameters between each other and with the characteristics of the host galaxies and, finally, we explore the relation between the break and a 0 radii in Sect.5.3.We explore the details of the distribution of GCs of NGC 1399 in Sect.6.We explore the credibility of some potential explanations of the approximate coincidence of the a 0 and break radii in Sect.7. Finally we synthesize and summarize our findings in Sect.8.In this work, we denote the natural logarithm by log and the logarithm of the base m by log m .We assume the distance of the Fornax cluster and all the investigated galaxies to be 20.0Mpc (Blakeslee et al. 2009).This corresponds to the angular scale of 5.8 kpc per arcminute.
Data analyzed
The galaxies investigated in this paper are member galaxies of the Fornax galaxy cluster.They are listed in Table 1.Low-mass Fornax cluster galaxies (M * < 10 9.5 M ⊙ ) have too few GCs for constructing the density profiles of their GCSs individually.Therefore, we stacked the GC candidatess of many low-mass galaxies in three mass bins in order to get their average GCS density profiles.These are the three "Stack" entries in Table 1.The details of the stacking procedure are explained in Sect.2.3.
All structural parameters of the galaxies in Table 1 were taken from Su et al. (2021).They are based on GALFIT profile fitting (Peng et al. 2002), using Sérsic functions, to photomet-ric g ′ -band data of the Fornax Deep Survey (FDS) (Iodice et al. 2016;Venhola et al. 2018).Stellar masses also come from that work.They were derived from empirical relations between colors and stellar mass-to-light ratio.We use archival photometric and spectroscopic catalogs of GC candidates for investigating their spatial distribution.Here follows a brief description of the datasets investigated in our work.
ACS Fornax cluster survey data
The ACS Fornax cluster survey (ACSFCS), taken using the Advanced Camera for Surveys (ACS) of the Hubble Space Telescope (HST), imaged 43 early-type galaxies of the Fornax cluster.Full details of the ACSFCS, scientific motivations and data reduction techniques, are given in Jordán et al. (2007).Each galaxy in ACSFCS was imaged in the F475W (g) and F850LP (z) bands.For studying the GCs, each image was sufficiently deep such that 90% of the GCs within the ACS FOV can be detected.The selection and identification of bonafide GCs of the ACS-FCS galaxies are performed in the size-magnitude plane (Jordán et al. 2015).The resulting catalog of GCs in the ACSFCS provides the probability of an object being a GC, denoted by Pgcs, where Pgcs is a function of half-light radius, apparent magnitude and local background.For our analysis, we selected GCs with Pgcs larger than 0.5.This leaves the faintest GC candidates of m g = 26.3mag.To separate the GCs into red and blue subpopulations, we adopted a dividing g − z color of 1.1 mag (Fahrion et al. 2020).
Spectroscopically confirmed GCs
We studied the spectroscopically confirmed sample of GCs of the Fornax cluster from the recent catalog produced by Chaturvedi et al. (2022).Reanalyzing data of the Fornax clus-ter taken using the Visible Multi-Object Spectrograph (VIMOS) at the Very Large Telescope (VLT) (Pota et al. 2018) and adding literature work, they have produced the most extensive GC radial velocity catalog of the Fornax cluster (see Chaturvedi et al. 2022, for details), comprising more than 2300 confirmed GCs.The faintest GC has m g = 24.2mag.They used a Gaussian modeling mixture technique to divide the GC population into red and blue GCs, with g − i ∼ 1.0 mag as separating color, which we adopt in our analysis.
Fornax Deep Survey data
The Fornax Deep Survey (FDS) is a joint project based on a guaranteed time observation of the FOCUS (P.I.R. Peletier) and VE-GAS (P.I.E. Iodice, Capaccioli et al. 2015) surveys.It consists of deep multiband (u, g, r and i ) imaging data from OmegaCAM at the VST (Kuijken 2011;Schipani et al. 2012) and covers an area of 30 square degrees out to the virial radius of the Fornax cluster.
We applied morphological and photometric selection criteria to the photometric compact sources catalog of the FDS (Cantiello et al. 2020) to decrease the fraction of contaminant objects that are not GCs.The criteria on the colors of the selected objects g − i > 0.5 and g − i < 1.6 were chosen according to the colors of spectroscopically confirmed GCs around the central cluster galaxy NGC 1399.Further criteria were inspired by cross-matching the FDS and ACS catalogs of GCs of NGC 1399.They were chosen such that we do not exclude too many real GCs and, at the same time, exclude as many contaminants as possible.We found a good balance when using the following criteria: CLASS_STAR>0.031,m g > 20 mag, and Elongation<3.The meaning of the parameters is explained in Cantiello et al. (2020).After applying the selection criteria, the faintest GC candidate in this calatog has m g = 27.0 mag.
It turned out that the FDS catalog, after applying the GC candidate selection criteria, shows systematic variations of the surface density of sources that have a tile-like pattern, see Fig. 1.The tiles corresponds to the OmegaCAM imaging tiles of the FDS survey.They probably result from varying observing conditions, like seeing, sky transparency, etc.When fitting the surface density profiles of GCSs of individual galaxies, we had to be careful that the tile borders do not introduce any kinks in the profiles.We did this first by visual inspection of plots of positions of the GC candidates in the wide neighborhood of the investigated galaxies and second by visual inspection of the plots of the radial profiles of density of the GC candidates around the target galaxies.
Stacking of faint galaxies
As mentioned above, we decided to stack the faint galaxies of similar stellar mass in order to have enough sources to extract the density profiles of their GCSs.We used only the FDS data for this, since most faint galaxies were not covered by the ACS-FCS.If the hypothesis of this paper, that the break radii coincide with the acceleration radii, is correct, then the break radii of all GCSs stacked in this way should be roughly equal in each mass bin (an acceleration radius nevertheless depends also on the particular distribution of mass in the given galaxy).We then treated the stacks as single galaxies.We created three such artificial objects.Their logarithmic masses were centered on the values of 8.0, 8.5 and 9.0 M ⊙ .The widths of all logarithmic mass bins were 0.5 M ⊙ .Stacking of galaxies of even lower masses did not provide sufficiently clear profiles of the projected density of GCs.
We stacked all galaxies from the Su et al. ( 2021) catalog that met the following criteria.First we excluded spiral galaxies, because our data do not allow to distinguish between GCs and star forming clumps or young star clusters in their disks.Early-type galaxies were identified by requiring their asymmetry parameter, stated in Su et al. (2021), to be lower than 0.06.We further excluded galaxies that are located close to the borders of the tiles of the FDS mosaic, and galaxies whose GCSs surface density profiles might be affected by interloping GCSs of other nearby galaxies.These were identified by visual inspection of the positions of the GC candidates in the neighborhood of the galaxies being stacked.The list of stacked objects in every mass bin is stated in Table A.1 in Appendix A.
We assigned to every stacked galaxy a stellar mass and Sérsic parameters.These were obtained as the median values for all galaxies included in the corresponding stack.
Fitting the surface density profiles of the GCSs by analytic functions
Here we describe how we determined for the investigated galaxies the parameters of the GCSs density profiles.
Extracting the observed profiles of surface density
For a given galaxy, we divided the GC candidates in radial bins according to their galactocentric distance, and for each bin, we calculated the surface density of sources in it.The bins had the shape of circular annuli, that is we ignored the possible ellipticity of the GCSs.We demonstrate in Appendix C that this simplification has no appreciable impact on the derived profile of the GCS density.We chose the widths of the radial bins such that was constant in each bin.Here n stands for the total number of sources falling in the annulus, A the area of the annulus and γ an estimate of the surface density of contaminants.The number N is thus the estimate of the number of GCs in the aperture.The parameter N was chosen to be big enough so that the profile did not appear too noisy (as judged by eye) and small enough so that the two straight parts of the broken power law profile were resolved by at least three data points.The purpose of subtracting the expected number of contaminants was to increase the signal to noise in the outer bins.This condition was used only for choosing the bin widths; the final profile parameters of the GCSs (including the contaminants level) were derived from fitting the surface density of all sources, that is n.
For the datasets that are not expected to contain many contaminants, that is the spectroscopic and ACS catalogs, we used simply γ = 0. Otherwise the value of γ was found iteratively in the following way.We first chose a low value of γ and constructed the observed surface density profile.That was then fitted by some of the analytic model profiles described below.One of the parameters of the fitted profile was the density of the background sources, γ.In the next iteration we used a value of γ that lied between the value of γ used in the previous iteration and the fitted value of γ.We had to stop increasing γ at some point, because when it is too large, the extracted surface density profile would be truncated or distorted, because the expression on the right-hand-side of the condition Eq. 3 would be lower than the assigned value of N at large radii.In some cases, the final γ was chosen such that the extracted profile did not show any small bumps arising from small clusters of contaminants (e.g., distant galaxy clusters in the background).We note that if the definitive γ was chosen to be somewhat below the true value, the resulting extracted profile would not be affected substantially; only the bins would not have the optimal widths.
The contamination by the light of the host galaxy makes it difficult to detect GCs near its center.In order to detect them, we have to subtract a model of the light of the galaxy from the image.The model is typically not perfect, and the fit residuals in the difference image still complicate the detection of the GCs.Furthermore, the light of the galaxy introduces photon noise, that decreases the signal-to-noise ratio of the GCs.Fainter GCs are affected more.This can deform the observed GCS density profile.The ground-based FDS data are more sensitive to these problems than the HST data: the GCs in HST images appear sharper because of no atmospeheric seeing effects, and therefore reach a higher signal-to-noise ratio.We indeed found the signature of this in our data: the inner slopes of the profiles of surface density of GC candidates were shallower in the FDS data than in the HST data.Examples can be seen in Fig. A.14 (NGC 1427) or Fig. A.8 (NGC 1380B).
We were using two strategies to mitigate the flattening of the inner GC density profile because of the problems with the contamination by the light of the host galaxy.If there were HST data available in the central region, we adopted the inner slope a derived from these data.For galaxies without HST data, we constructed the GCS surface density profiles only from brightenough sources in FDS.We determined the magnitude cut using a plot magnitude -projected galactocentric distance of the sources.The limiting magnitude was set as that of the faintest sources that were still detectable in the very center of the galaxy.This approach has the downside that it reduces substantially the number of GC candidates, such that it becomes harder to trace the GC density profile, particularly at large galactocentric distances.In Appendix B we demonstrate on the example of NGC 1399 that the break radius does not depend significantly on the magnitude cut.
The surface density profiles of the GCSs were extracted and analyzed only within a restricted range of the projected galactocentric radius defined by the limits r min and r max .The upper limit, r max , was usually enforced either by the proximity to the GCSs of other galaxies, or because of the fluctuations of the density of contaminants, or because there was a border of tiles of the FDS mosaic.The lower limit, r min , was usually taken to be zero, but in some galaxies, it was used to reduce the problems with the contamination by the light of the host galaxy.The radii r min and r max were determined by a visual inspection of both the map of the GC candidates near the inspected galaxy and the surface density profile of the GC candidates.
When constructing the GCS surface density profiles for a galaxy, we had to exclude the areas occupied by the GCSs of neighboring galaxies.We excluded sources close to the following major galaxies (unless studying the GCSs of these galaxies themselves): ESO 358-33, NGC 1396, NGC 1317, NGC 1373, NGC 1374, NGC 1375, NGC 1379, NGC 1380, NGC 1381, NGC 1382, NGC 1386, NGC 1387, NGC 1389, NGC 1404, NGC 1427 and NGC 1427A.We marked all sources closer than 3 ′ to these galaxies as excluded.This distance was chosen by visual inspection of the map of positions of all sources.We note that we did not exclude the area occupied by the GCS of NGC 1399 because it is very extended.If we did so, it would not be possible to construct the observed surface density profiles of several smaller galaxies in the vicinity of NGC 1399.Instead, whenever reasonable, we just assumed that the GCs of NGC 1399 are distributed with a constant surface density in the vicinity of the investigated smaller galaxies, so that we could treat them as additional contaminants.This approach was not applied to NGC 1404, which has a rather extended GCS and is located very close in projection to NGC 1399.Due to the changing number density of NGC 1399 GCs across the area of NGC 1404, we could not consider them as uniformly distributed contaminants and we had to use another method, see Sect.3.1.1.
For a few galaxies, we had to mark as excluded sources in the regions of blemishes of the FDS survey.Most often these were regions around bright stars that appear as holes in the maps of the positions of sources from the FDS catalog.
In general, when calculating surface density in a given annulus, we did not consider sectors in which there was at least one excluded source.In total, the observed surface density of sources in a given annulus was calculated as where α stands for the sum of angular extents of all excluded sectors, n for the number of sources in the not-excluded sectors, and A for the total area of the inspected annulus.Given that the number of sources in a given area follows the Poisson distribution, we estimated the uncertainty of the surface density of sources as (5)
The special case of NGC 1404
The galaxy NGC 1404 lies in projection close to the central cluster galaxy NGC 1399.Their GCSs overlap.The GCS of NGC 1399 has a strong gradient of surface density at the position of NGC 1404.The method described above would not allow us to produce a reliable GC density profile of this galaxy.Instead, we use the catalog of spectroscopic GCs by Chaturvedi et al. (2022) for NGC 1404 and assume that it covers the galaxy homogeneously.We could then make use of not only the information about the spatial positions of the sources, but also the information about their radial velocities, since the radial velocity of NGC 1404 is ∼520 km s −1 larger than that of NGC 1399.We first applied spatial criteria on the sources to be used for constructing the observed profile of the GCS.They are depicted in the upperright panel of Fig. A.13.We avoided regions that are closer than 8 ′ to NGC 1399.We also avoided the region that is closer than 10 ′ to the point with the J2000 coordinates (54.86017, -35.75988), because this region seemed to suffer from geometrical incompleteness of the spectroscopic survey because the density of sources in this region was lower than in its surroundings.
Just as for all other galaxies, we applied a limit for the maximum distance of the used sources from the galaxy.We also applied a radial-velocity limit: all sources that had radial velocities lower than the center of NGC 1404, 1947 km s −1 , were excluded, as shown in the bottom-right panel of Fig. A.13.This helped us to reduce substantially the contamination by the GCs of NGC 1399.
While the observed number density profile was constructed from only 23 sources and there were only 2-3 sources per bin, the break in the profile is clearly visible and the break radius follows the same correlations as the break radii of the other galaxies.
Models of density and surface density profiles of GCSs
The extracted profiles of surface density of GCs candidates were fitted by one of the analytic functions described in this section.We made use of the fact that for a spherically symmetric GCS, the 3-dimensional density profile ρ corresponds to the projected surface density profile Σ given by the Abel transform: Here r stands for actual galactocentric distance and R for the projected galactocentric distance.
We considered several types of volume density profiles.The first was a pure broken power law given by Eq. 1.Its corresponding surface density profile is given by the equations where the symbol 2 F 1 denotes the Gaussian hypergeometric function, and u br = u(r br ).This form of surface density profile was used to fit only the datasets that contain a negligible fraction of contaminants, that is usually for the GC candidates coming from the ACS catalogs.
For the datasets that contain contaminants, we had to add one more free parameter in the model profile, γ, that expresses the surface density of contaminants: The contaminants were thus assumed to be distributed homogeneously, that is with no density gradient.This profile was mostly used for fitting of the profiles extracted from the FDS catalog.
For some galaxies, it was not possible to fit the surface density profile of GCS by a broken power law.This happened either when the GCS was so extended that the ACS field captured only the inner part of the broken power law profile, or the broken power law would be only poorly constrained because the galaxy had too few GCs.In these situations, we only made a fit by a simple power-law density profile.The density profile of a single power law was parametrized as which corresponds to the surface density profile When contaminants were expected to contribute to the surface density profile substantially, we added the background term to the single power law:
The fitting method
We found the best-fit parameters of the models by the maximum likelihood method.Here we assumed the uncertainty of the number of sources in each bin follows the Gaussian distribution.
Then the likelihood reads: Here p = (p 1 , p 2 , p 3 , . . ., p ν ) denotes the vector of the free parameters of the model, and r i the central radius of the i-th bin, that is the arithmetic average of the inner and outer radius of the given annulus, Σ m denotes the surface density predicted by the fitted model and Σ obs,i the observed surface density of GC candidates in the given bin.We used the following method to estimate the uncertainty of the fitted parameters.It is sometimes called the method of support.It is based on the statistical likelihood-ratio test.Let p max denote the vector of the best-fit free parameters and L max = L( p max ) the maximum value of the likelihood function.Then the upper (lower) limit on the value of the j-th free parameter, p j , was estimated by maximizing (minimizing) p j over the region of the parameter space satisfying the condition ln The examples in Appendix C and Sect.6.2.1 demonstrate on artificial data that our methods are able to recover correctly the intrinsic parameters of the density of the GCSs.
Resulting parameters of the fits
For every galaxy, we eventually obtained density profiles of their GCSs and their fits from up to three types of data (ACS, FDS, spectroscopic, see Sect. 2).All the fitted parameters are provided in all galaxies, we show the background-subtracted profiles -this means that the fitted value of the surface density of contaminant sources was subtracted from both the measured and fitted profiles.We also show profiles without background subtraction for a few galaxies, namely for NGC 1352 (Fig. A.5), NGC 1387 (Fig. A.11) andNGC 1427 (Fig. A.15).In all plots showing the background-subtracted FDS profiles we also indicated by the dashed horizontal lines the fitted values of the background.It was necessary to accept for every galaxy one final set of the parameters of the density profile of the GCS.We had to choose for every galaxy the values of the inner slope a, outer slope b and the break radius r br .We did not make any final choice of the surface density of contaminant sources and the normalization of the density profile of the GCS -they are strongly influenced by the method of observation and were not necessary for the subsequent analysis.
The way we selected the final set of parameters of the GCS profile density was different for different galaxies.We chose the parameters derived from the spectroscopic data only if no other data were available for the given galaxy, because spectroscopic data can easily be degraded by geometric incompleteness of the survey.Whenever possible, we based the final parameters on the FDS or ACS data.If data from both surveys were available for a given galaxy, the strategy of accepting the final parameters was decided on the basis of a visual inspection of the surface density profiles.If the inner slope, a, was different in the FDS and ACS data, we accounted the difference to the problems with contamination by the light of the host galaxy in the FDS data (see Sect. 3.1) and preferred the ACS value.For many galaxies, the ACS data did not fully cover the outer parts of the broken power law profiles.Therefore, if the outer slopes of the profile came out differently, we adopted b from the FDS data.Regarding the break radii, if the FDS profiles appeared affected by the contamination by the light of the host galaxy (i.e., the inner slope was shallower for the FDS data than for the ACS data), we preferred the break radius estimated from the ACS data.If the break was close to the outer limit of the ACS data, we preferred r br from the FDS data.
In the cases that a given parameter p j appeared consistent between the two datasets for a given galaxy, we combined the measurements using the following form of the weighed average.It can account for the fact that our estimates of the uncertainty limits on p j were asymmetric.Let ∆ + p j and ∆ − p j denote the upper and lower errorbars of the parameter p j , respectively.Next, let us introduce the "joint Gaussian distribution" which is subsequently used on several occasions in this paper.We approximated the probability distribution of p j by g(p j , p j , ∆ + p j , ∆ + p j ).Here p j stands for the best-fit value of p j .If g FDS denotes the probability distribution for the FDS data and g ACS that for the ACS data, then the function represents the total likelihood function of p j .We got the final estimate of p j as the argument p j,jointmax maximizing the function g tot (p j ).The uncertainty limits were obtained through the method of support, that is as the border values of the interval of p j meeting ln g tot (p j,jointmax ) − ln g tot (p j ) < 0.5.( 18) For some galaxies, the GCS break radius was close to the border of the ACS data and the inner reliable limit of the FDS data.For such galaxies, we had only the fits of the inner slope from ACS data and the outer slope from the FDS data.In such cases, we opted for a "manual" method when we estimated the break radii and their uncertainties by visual inspection of both profiles in the background-subtracted surface density plots.That was, for example, the case for the profile of the blue subsample of the sources around NGC 1336, as shown in Fig. A.4.In these cases, when we accepted this subjective method, we stated very conservative estimates of the uncertainty limits.
The final parameters are listed in Table 2.The notes indicate the data or joining method that was used to obtain the value: f indicates the FDS data, h the HST ACS data, s the spectroscopic data, a the weighed average, and m the manual joining method.There are also some values marked by an asterisk.They indicate suspicious measurements.Those were identified by visual inspections of the observed and fitted profiles of surface density in Appendix A. The measurements were usually deemed suspicious if one of the parts of the broken power law profile was resolved by only one or two data points.This corresponds to the galaxies that have few GCs.Next, we marked all parameters of the density profile of NGC 1404 suspicious, because this galaxy is known to be undergoing a tidal interaction with NGC 1399, their GCSs overlap and we used spectroscopic data to fit the profile, that can suffer from incomplete spatial coverage of the spectroscopic surveys.Finally, we deemed suspicious the fitted values of b for the two galaxies with the most extended GCS, that is NGC 1399 and NGC 1316, because they might have been affected by the large-scale sensitivity variations of the FDS survey (Sect.2.3).This is discussed in detail in Appendix D. It turned out that all of the suspicious measurements actually do not deviate noticeably from the scaling relations followed by the trusted measurements, as it will be shown in Sect. 5.
Calculation of a 0 radii
The a 0 radii were calculated on the basis of the Sérsic fits of the galaxies, and the estimates of their stellar masses (Su et al. 2021, see Sect. 2 for details).We were interested in two types of a 0 radii: those predicted by Newtonian gravity and those predicted by MOND gravity.Assuming that the galaxies are spherically symmetric, we could use the approximate analytic formulas for the density Sérsic spheres of Lima Neto et al. (1999) (as updated by Márquez et al. 2000) to calculate the profiles of Newtonian gravitational acceleration, g N (r).The profiles of MOND acceleration, g M , were obtained by transforming g N using the formula Eq. 2. We adopted the observationally motivated interpolation function ν: and value of a 0 = 1.2 × 10 −10 m s −2 (McGaugh et al. 2016;Li et al. 2018).Once the radial profiles of gravitational accelerations were known, we could find the a 0 radii numerically.Finding the a 0 radii for the stacked galaxies required a more elaborate approach.First, we calculated the acceleration profile for each of the stacked galaxies individually.We then assigned to each stack a final acceleration profile that was calculated as the median acceleration profile of all objects contributing to the stack.Then we could solve for the a 0 radii.
In the following, we denote by r a 0 ,N the Newtonian a 0 radius and by r a 0 ,M the MOND one.In the case that we do not need to
GC population
Table 5.Average and intrinsic scatter of the difference between structural parameters of the GCSs of the red and blue GC populations.
Parameter (red-blue) distinguish between them or the statements hold true for both, we denote the a 0 radius by r a 0 .
Uncertainties on the a 0 radii were derived only from the uncertainties of stellar masses.These were tabulated by Su et al. (2021) for several stellar mass bins.We used linear interpolation to obtain the uncertainties of the stellar mass of individual galaxies.We neglected the uncertainties in distance.The total line-of-sight distance scatter for the galaxies in the Fornax cluster is about 0.5 Mpc (Blakeslee et al. 2009), which at the assumed distance of the center of the cluster of 20 Mpc, would result in the difference of stellar mass of 0.02 dex.This is negligible compared to the uncertainty in masses caused by the uncertainty in the mass-to-light ratio (Table 1).
For NGC 1399 we considered including the contribution of the hot intergalactic gas to the gravitational field of the galaxy.This could potentially influence the a 0 radius of the galaxy.The profile of cumulative mass of the hot gas for this galaxy was presented in Paolillo et al. (2002) and Samurović & Danziger (2006).It turned out that including the gas mass has only very little effect on the position of the a 0 radii.At the position of the MOND a 0 radius the cumulative mass of hot gas is just ∼ 10 9 M ⊙ , which is negligible compared to the mass of the galaxy (see Table 1).For this reason, we neglected the contribution of the gas mass in the gravitational field when estimating the a 0 radii.
The resulting a 0 radii are tabulated in Table 1.Some galaxies at the low-mass end of our sample, do not have a 0 radii.These are low-surface-brightness galaxies inside which the gravitational acceleration does not exceed a 0 .Interestingly, the GCSs of such galaxies still can show broken power-law profiles as we see below.
Structural parameters of GCSs for different GC subpopulations
We explored how the fitted values of the structural parameters of GCSs, a, b, a − b and r br , differ for the total, red and blue GC populations.Inspection of plots of these parameters against the stellar masses of the galaxies revealed that the suspicious measurements identified in Sect.3.4 often lied far from the reliable measurements, with the exception of the break radii.Therefore, all suspicious measurements were excluded from the subsequent analysis in this section.
For each structural parameter, we counted the number of galaxies for which the parameter is greater for the red subpopulation of GCs than for the blue subpopulation, the number of galaxies for which the situation is opposite, and the number of galaxies for which the parameter is consistent for the blue and red GC populations.The consistency means that the uncertainty intervals of the measurements overlap.The results are listed in Table 3.Given that we do not have all parameters for all galaxies, the last column of this table indicates for each structural parameter the total number of galaxies that this comparison is based on.
The inner slope of the GCS density profile, a, shows the most variate behavior.It can be both smaller or greater for the red GCs than for the blue GCs, or the two GCs populations can have it consistent.All these cases occur approximately in the same number of galaxies.This result could have been influenced by the problems with the contamination by the light of the host galaxies.The b parameter is usually consistent for the red and blue GC populations, but in some cases the parameter is higher for the blue GCs.The difference of the slopes, a−b, is consistent for the red and blue GCs for all but one galaxy, for which a − b is greater for the red GCs (i.e., the break is more pronounced).The values of r br are in a roughly equal number of galaxies either consistent for the red and blue GC populations or they are lower for the blue population.
Next, we estimated the mean and intrinsic scatter of the structural parameters for all GC populations.We assumed that the intrinsic distribution of every parameter p j is Gaussian: where p j is the mean and σ int, j the intrinsic scatter of the distribution.We took into account the asymmetric uncertainty intervals of the structural parameters.The probability distribution of the parameter p j of the i-th galaxy, p j,i , was modeled as g mes, j,i (p j,i , p j,i , σ + p j,i , σ − p j,i ), where the function g was defined by Eq. 16.The best-fit values of p j and σ int, j were found by maximizing the likelihood function: where the symbol * denotes convolution.The uncertainty limits of p j and σ int, j were found through the method of support.In order to avoid numerical difficulties, we required the intrinsic scatter to be at least 0.01.The results are stated in Table 4. First, the table reveals the typical values of the structural parameters: a = −1.7,b = −3.4 and a − b = 1.7.The parameters a, b and their difference a − b do not differ substantially between all GCs, red GCs and blue GCs.The values are consistent within 1 σ.The intrinsic scatters of the parameters are consistent with each other for the different GC populations too.It is worth noting that the intrinsic scatter of the prominence of the break, that is a − b, is also consistent with being zero for the total and blue GC populations.This means that the intrinsic scatter is smaller than the measurement uncertainties of the individual data points.
We also inspected the statistical distribution of the differences of the structural parameters for red and blue GCs in individual galaxies, that is p j,i,red − p j,i,blue .We derived the mean and intrinsic scatter as before.The results are summarized in Table 5.It confirms that the structural parameters of the red and blue GC populations are in average the same within 1-2 σ uncertainty limits.The outer slope b is at the border of the 2 σ uncertainty limit, which suggests that the outer slope might be systematically steeper (i.e., more negative) for the red GCs.The break radii for the red and blue GC populations seem to be remarkably consistent, differing typically just by 0.3 kpc.In summary, we found at most marginal evidence that the structural parameters of the GCSs would depend on the color of the GCs.Following the rule of Occam's razor, we consider hereafter no difference between the structural parameters of the GC subpopulations.
Correlations of the structural parameters of GCSs
We were interested in how the fitted structural parameters of GCSs, a, b, a − b and r br , correlate with the stellar mass, effective radius, Sérsic index and the Newtonian and MOND a 0 radii of the host galaxy.We exploited that through Spearman's correlation coefficient and its p-value.The p-value expresses the probability that the two quantities under consideration actually do not correlate and the observed amount of correlation is there because of a coincidence.We comment here only on the pairs of quantities which correlate at least at the 5% confidence level (i.e., their p ≤ 0.05).
We found that the parameter b correlates significantly with both Newtonian and MOND a 0 radii.The correlations are shown in Fig. 2 It is noteworthy that the parameter b correlates better with the a 0 radii than with the galaxy stellar mass.The p-value of the correlation with the latter is 0.03, after removing of the galaxies for which the a 0 radii do not exist.In contrast, the p-value of the correlation of b with the a 0 radii is 0.008 (0.009) for the Newtonian (MOND) case.This suggests that there is a connection between the distribution of stars in the galaxy and the distribution of GCs beyond the break radius, which typically exceeds the effective radius of the galaxy in our sample twice.
The break radii correlate significantly with all the considered characteristics of the galaxies, as shown in Fig. 3.We note, however, that only the correlation with the a 0 radii is close to a one-to-one relation, as we describe in more detail in Sect.5.3, in agreement with the finding of BSR19.The mean of the ratio of the break radius and the effective radius r br /R e = 2.4 with a root-mean-square scatter of 0.7.The ratios r br /r a 0 are discussed in Sect.5.3.
We also explored whether the structural parameters a, b, a−b and r br correlate with each other.We found only two significant correlations, that are not surprising in the light of what has been said above.The break radius correlates with the b parameter, as it can be expected because we already found above that b correlates with the a 0 radius, which in turn correlates with the break radius.Next, we found that the difference a − b correlates with b.This is again not too surprising given that the absolute value of b is typically larger than the absolute value of a (Table 4).
Equality of break radii and a 0 radii
Here we come to the main part of the paper -the comparison of the break radii of GCSs with the a 0 radii of their host galaxies.The work of BSR19 pointed out their approximate match, nevertheless their sample contained only galaxies with a relatively narrow range of stellar masses and therefore also of the a 0 radii.Moreover, they investigated only the total populations of GCs, such that the shapes of profiles of the red and blue GCSs, including the break radii, remained unclear.The sample investigated here rectifies these insufficiencies.
The break and a 0 radii are compared in Fig. 4. The top row corresponds to the a 0 radii calculated from the MOND gravity, and the bottom row to those calculated assuming Newtonian gravity.The three columns, from left to right, correspond to the population of all, red and blue GCs, respectively.The squares represent the data from the sample from the present paper, while the circles those from BSR19.The open squares indicate the suspicious measurements.The diagonal black dashed lines indicate the one-to-one relation.The break radii of the galaxies that do not have a 0 radii are shown by the vertical dashed lines.In these galaxies, the acceleration generated by their stars, either as predicted by the Newtonian or MOND gravity, is less than a 0 everywhere in their extents.The galaxy NGC 1399 is plotted twice in each panel -one measurement comes from the data analyzed in this paper, the other comes from the paper BSR19, where it was derived from spectroscopic data.The match of the break and a 0 radii for NGC 1399 became worse in this paper (see Sect. 6 for more details).
The figure demonstrates a good match of the a 0 radii and break radii.The match is better for the a 0 radii calculated from MOND.This holds true regardless of the separation of the GCs in red and blue subpopulations.It should be pointed out that an exact match cannot be expected because of the tidal interactions between the galaxies and galaxy mergers in the dense environment of a galaxy cluster.The tidal interactions can even lead to a loss of GCs from the galaxies or to a transfer of GCs from one galaxy to another (Bekki et al. 2003).Our data indeed provide an observational indication that galaxy interactions influence the break radius, see Sect.6.3.
Figure 5 gives us an alternative view of the same data.It shows the ratios of the break and a 0 radii plotted against the break radii.The estimates of the expected values and uncertainty limits of the ratios were based on the fact that the distribution h z (z) of the variable z, that is a ratio of two independent variables x and y, z = x/y, can be calculated from the known distribution functions of x, h x (x) and of y, h y (y) using the so-called ratio distribution formula: The distributions of the break and a 0 radii for every individual galaxy were again approximated by Eq. 16.We estimated the expected value and the 1 σ uncertainty limits of r br /r a 0 as the 18, 50 and 84-th percentiles of the ratio distribution.The top row of Fig. 5 refers to the a 0 radii calculated in the MOND way, the bottom row to those calculated in the Newtonian way.The points that lie at the top border were shifted downward because they lie out of the displayed radial range, but their error bars are displayed correctly.We could make several observations from these plots.1) The break and MOND a 0 radii agree with each other within a factor of two for most galaxies -either in terms of the most likely values or within 1-2 uncertainty limits.This applies to all GC populations.2) For the total GC population, there is a hint of a correlation of the ratio r br /r a 0 with r br if the break radius is greater than about 20 kpc.In this region, however, most data points come from the spectroscopic data analyzed in BSR19, which could be biased by systematic errors.
3) For the blue GC population, there is a correlation of the ratio r br /r a 0 with r br for the whole galaxy sample.4) The Newtonian ratios r br /r a 0 seem to be systematically more offset from unity than the MOND ratios.We inspect the points 1) and 4) more in detail below and the points 2) and 3) in Appendix E.
Let us denote ζ = r br /r a 0 .We aimed to fit the distribution of log 10 ζ by a normal distribution with the mean of log 10 µ and a
GCS break radius [kpc]
Fig. 4. Relation of the a 0 radius versus the break radius of the GCS for each galaxy.The upper row corresponds to the MOND a 0 radius, the bottom row to the Newtonian a 0 radius.The three columns in the horizontal direction correspond to the whole, red, and blue GC (sub)population, respectively.The squares in the first column mark the galaxy sample inspected in this paper, the circles mark those from BSR19.The empty squares denote the suspicious measurements.The diagonal dashed black lines represent the one-to-one relation.The vertical gray dashed lines mark the galaxies that do not have a 0 radii.scatter of σ int .Specifically, we wanted to fit the distribution of ζ by the lognormal distribution: The best-fit parameters and their uncertainties were found through a likelihood function analogous to Eq. 21.This was applied to the sets of the total population and the red and blue subpopulations of the Fornax cluster galaxy sample investigated here.In addition, we created a union sample of the galaxy set of BSR19, with NGC 1399 excluded, and the sample of total GC populations of the Fornax galaxies.This was repeated for the a 0 radii calculated in the Newtonian and MOND ways.The results are listed in Table 6.
The table indeed shows that the MOND r br /r a 0 ratios are indeed close to one for all GC (sub)populations.The intrinsic scatter is around 0.28 dex (i.e., the factor of 1.9).The Newtonian r br /r a 0 ratios are somewhat offset from unity, with a mean of about 1.4.The intrinsic scatter is similar to the MOND case.
In Appendix E, we fit the relation of the break radius and a 0 radius by a linear function with a lognormal intrinsic scatter.The fitted value of the slope is not consistent with one.Nevertheless, it is possible to argue that the deviation of the slope from one is caused by a few outliers.The data inspected in this work thus Fig. 5. Demonstration of the approximate equality of the break radii of the GCSs and the a 0 radii of their host galaxies.First row: The ratio of the break radius and the MOND a 0 radius as a function of the break radius.The vertical axis is in the base-2 logarithmic scale, while the horizontal axis in the base-10 logarithmic scale.The horizontal line indicates the ratio of one (the break and a 0 radii are equal).Second row: The same as the first row but for the a 0 radii calculated for the Newtonian gravity.Columns: From left to right, the rows show the data derived for all GCs and for the red and blue subpopulations of GCs.Table 6.Results of the fitting of the distributions of the ratios r br /r a 0 by a lognormal distribution for different GC datasets.indicate just the fact that the break and a 0 radii agree within a factor of two.
The break in the GCS of NGC 1399 under scrutiny
Here we explore in detail the profile of the GCS of NGC 1399, the central galaxy of the Fornax Cluster.Apart from that, we chose this galaxy because is has the richest GC system in our sample, and we moreover have multiple datasets for it (FDS, ACS and spectroscopic data).Its GC density profile has been fitted by a broken power law already in BSR19.
Comparison of profiles extracted from different datasets
We analyzed the positions of GCs for this galaxy from three sources: the photometric catalogs of FDS and ACS and the spectroscopic data of Chaturvedi et al. (2022) and Fahrion et al. (2020).In addition, even another spectroscopic dataset (Schuberth et al. 2010) was fitted by a broken power law in BSR19 (and without the uncertainty limits already in Bílek et al. 2019b).
The comparison of all the profiles is shown in Fig. A.12.The inner slope of the broken power law is virtually the same for all the datasets analyzed here.This suggests that the presence of NGC 1399 in the center of the GC system did not affect our ability to detect the GCs.This might be counterintuitive, given that this is the brightest galaxy analyzed here.This is probably because the relation between magnitude and surface brightness of elliptical galaxies shows a peak at intermediate magnitudes (Graham & Guzmán 2003).The match of the inner slope of the profile derived from the spectroscopic GCs of Chaturvedi et al. (2022) and Fahrion et al. (2020) with the photometric samples indicates that it has a good spatial coverage near the center of the galaxy.
The outer slope and the break radius of the spectroscopic data do not agree with the FDS data that well.While the outer slopes for the total population of GCs agree within the 2 σ uncertainty limits, the break radii do not (Table A.1).This might indicate either issues with a geometric incompleteness of the spectroscopic sample or an imprecise estimation of the surface density of contaminant sources when extracting the profile from the photometric data.Indeed, the GCS of NGC 1399 is extended and the extraction of the data might have be affected by the variations of the sensitivity of OmegaCAM near the edges of its FOV (Sect.2.3).The outer slope of the FDS profile is rather bumpy, whereas that of the spectroscopic sample looks cleaner.This is probably because of contaminants in the FDS sample, such as background galaxy clusters.These would be excluded from the spectrocopic sample on the basis of radial velocity.The profile analyzed in BSR19 deviates most from the other profiles.We attribute this to a geometric incompleteness of the survey (see Fig. 1 in Schuberth et al. 2010).The position of the break radius is however not affected too much: BSR19 found 7.3 +2 ′ −0.4 , while we accepted here 11.0 ′ ± 0.6 (Table 2).The match of the break radius with the a 0 radius, located at about 2.4 ′ , got worse in this paper.We note that the surface density profile of the GCs from the catalog of Schuberth et al. (2010) was fitted by a broken power law already in Samurović & Danziger (2006).Their results are in good agreement with those of BSR19.
Dependence of the profile parameters on radial velocity cuts
The large number of GCs around NGC 1399 allowed us to explore how the radial profile of the density of the GCs depends on the line-of-sight velocity of the GCs with respect to the center of the galaxy.We assumed a radial velocity of NGC 1399 to be 1424.9km s −1 .The profiles for different velocity cuts are dis-played in Fig. 6.The fitted values of the parameters of the broken power-law profiles are presented in Table A.3.
The figure shows that the break radius shifts toward the center of the galaxy for the GCs that have larger velocities with respect to the galaxy.The outer slope, b, remains constant.The inner slope, a, becomes steeper for GCs having lower radial velocities with respect to the galaxy center.As a consequence, the GC density profile is almost a simple, that is unbroken, power law for the lowest radial velocity bin.
Attempt for explanation
As the first step toward the understanding of this observation, we constructed simple dynamical models of the GCS of NGC 1399 and explored how the profile of GC density depends on the chosen radial velocity bin.To this end, we solved the spherical Jeans equation, which gave us the profile of velocity dispersion of the GCS, see Bílek et al. (2019b) for details.We assumed the dark matter halo and the stellar mass-to-light ratio that were derived in that paper from the best-fit isotropic Jeans model of the GCS of this galaxy.For our present model, we assumed the same broken-power-law profile of the density of the GC system as in Bílek et al. (2019b).We solved the Jeans equation for three different choices of the anisotropy parameter: β = 0 (isotropic), β = 1 (purely radial) and β = −3 (highly tangential).The β parameter specifies the typical shapes of orbits of the GCs around the galaxy.Having solved the Jeans equation, we generated a three-dimensional model of the GC system.Each GC was randomly assigned a position and velocity according to the assumptions described above.The velocities were drawn from an ellipsoidal Gaussian distribution, according to the solution of the Jeans equation and the assumed anisotropy parameter.There were 10 4 GCs in each model.We eventually created a catalog of the projected positions and line-of-sight velocities of the artificial GCs.That was analyzed like the real data.
The results are shown in Fig. 7.For the isotropic and radial models, the position of the break indeed depends on the radial velocity of the GCs with respect to the galaxy, such that the GCs that are slower have their break radius further away from the galaxy, in agreement with the observed data.The tangential model does not show any obvious dependence of the break radius on the GC velocity range.This hints at a radial or isotropic anisotropy of the real GCS of NGC 1399.Next, the slopes of the broken power law depend on the inspected range of radial velocities of GCs in all models.The difference is most pronounced for the inner slopes of the radial and tangential models.The inner slopes of the radial model become more shallow toward the low radial velocities of GCs.The trend is the opposite for the tangential model.The tangential model thus resembles the real NGC 1399 in this regard.
We explored many other values of the anisotropy parameters, allowing them even to be a function of the galactocentric distance.We were never able to fully reproduce the trends observed in NGC 1399.Most notably, we never got close to equalizing the inner and outer slopes of the GC density profile that is observed for the GCs that have low radial velocities with respect to the center of NGC 1399.It might be necessary to model the GC system by several GC populations with different spatial distributions and anisotropy parameters.They would correspond to GCs formed in situ and accreted from possibly several galaxies.This more realistic modeling is beyond the scope of this paper.It should also be pointed out that in all of our models, including the non-isotropic ones, we used a gravitational potential that was derived assuming an isotropic and spherical GCS.
Tangential
Jeans model ( = 3) Fig. 7. Same as in Fig. 6, but for modeled data.The models are Jeans spherical models with the indicated types of anisotropy parameters.The models were based on the gravitational potential of NGC 1399 derived in Bílek et al. (2019b) and the parameters of the GCS used in that work.The vertical dashed line at 30 ′ is an eye-guide to facilitate the reading of the variations of the break radius.As a side note, the values of the parameters recovered from the projected positions of the artificial GCs agreed, within the uncertainty limits with the parameters that were used to generate the three-dimensional positions of the GCs.This demonstrates the correctness of our methods.
Dependence of the break radius on azimuth: elongation toward the interacting neighbor NGC 1404
We inspected also how the break radius varies in azimuth in NGC 1399.We defined eight sectors of the same angular widths centered on the galaxy.The first sector was centered on the western photometric major semi-axis of NGC 1399.This semi-axis points to the west almost exactly.We initially experimented with the GC candidates selected from the FDS data for this exercise, but it turned out that the profiles contained spikes.By inspecting images of the sky from the Digitized Sky Survey at the position of the spikes in the Aladin software (Bonnarel et al. 2000;Boch & Fernique 2014), it seemed that the spikes were caused by clusters of galaxies in the background, that is by contaminating sources.This is why we eventually decided to utilize the spectroscopic data.This also is the reason why we could not inspect in detail the second largest GCS in the Fornax cluster that is possessed by NGC 1316.
The fitted parameters of the individual sectors are presented in Table A.2.The found break radii are plotted as a function of azimuth in Fig. 8.The azimuth 0 • coincides with the photometric western major semi-axis of NGC 1399; the azimuth 90 • coincides with the northern minor semi-axis.We can see that the break radii form approximately an ellipse.It is interesting that the ellipse is inclined substantially with respect to the stellar body of the galaxy.Instead, one of the major semi-axes of the ellipse formed by the break radii points toward the galaxy NGC 1404.It is well known that both galaxies are tidally interacting (Bekki et al. 2003;Sheardown et al. 2018).This suggests that galaxy interactions can influence the distribution of GCs, and thus the values of break radii.
Interpretation
BSR19 proposed several potential explanations for the existence of the breaks in the profiles of the densities of GCS and of the coincidence of the break radii with the a 0 radii.In the following subsections, we develop these ideas one step further and discuss them in the light of the new data.New ideas are added.Some of the interpretations are specific for a MOND universe, because in MOND, the constant a 0 enters naturally in many phenomena.It is in part because a 0 marks the boundary between the two regimes of the law of gravity or inertia.Some of our interpretations are applicable also in the ΛCDM cosmology.Some of the explanations are based on the validity of the radial ac-celeration relation, which is an empirical fact.In MOND, the radial acceleration is a trivial implication of the theory, while the ΛCDM cosmology still is finding its way toward its full explanation through the galaxy formation theory (e.g., Di Cintio & Lelli 2016;Santos-Santos et al. 2016;Navarro et al. 2017).
Consequence of two regimes of gravitational potential and of the accretion of GCs in mergers
Supposing Newtonian gravity, the gravitational field in the inner region of a high-surface-brightness galaxy is dominated by the contribution of baryons.On the other hand, far from the galaxy center the gravitational field is dominated by the contribution of the dark matter halo.Given that according to the radial acceleration significant amounts of dark matter are needed only beyond the a 0 radius, the strengths of the gravitational fields generated by the baryons and the dark matter halo have to be equal roughly at the a 0 radius.In the inner region of the galaxy, the gravitational potential is steep, since it can roughly be approximated by that of a point mass that represents the baryonic component.In the outer region, the potential is more shallow.According to the radial acceleration relation it can be approximated by a logarithmic potential.
A large fraction of the baryonic mass of massive galaxies is expected to be gained by accretion of smaller galaxies.These bring also their GCs into the system.There are multiple pieces of evidence that a large fraction of blue GCs gets into giant galaxies through the accretion of dwarf galaxies (Côté et al. 1998;Hilker et al. 1999).
The GCs that are brought in with the satellites are tidally stripped from the satellites preferentially when the satellites are close to their orbital pericenter with respect to the host.Various accreted satellites have various pericentric velocities.If the stripped GCs have a large enough velocity at the moment of stripping, they reach the outer shallow part of the gravitational potential of the host and spread over a large range of apocentric distances.In contrast, if the GCs have a low radial velocity with respect to the host when stripped, they reach in the apocenters of their orbits only the inner steep part of the gravitational potential.Therefore their apocentric distances span only a narrow range (illustrated in Fig. 2 of BSR19).A break in the profile of the GC system is then expected at the border between the steep an shallow part of the gravitational potential of the host, that is near the a 0 radius.
The same reasoning can be applied to MOND.The only difference is that the change of the slope of the gravitational potential in the inner and outer regions of the galaxy is not because of a dark matter halo, but because of the strong and weak field regimes of MOND.
If such an interpretation was true, then the breaks in the density of GCSs would be very useful for investigation of dark matter halos under the assumption of the ΛCDM cosmology.The breaks would mark the radius at which the gravitational field changes from the baryon-dominated to the dark-mater dominated regime.This would allow estimating the effective radius of the dark matter halos.It is already known that the masses of dark matter halos of galaxies can be estimated from the number of GCs that the galaxies have (Spitler & Forbes 2009;Harris et al. 2015).
It would be ideal to explore if this mechanism of formation of breaks in GCS density profiles actually works through simulations.This is beyond the scope of this paper.Here we instead resort to looking for observational signatures of this scenario.GCs initially on circular orbits Fig. 9. Impact of the increasing of the external field on the trajectories of GCs that were originally on circular orbits of different radii, assuming the MOND gravity.The horizontal line indicates the MOND a 0 radius.
For giant galaxies, most blue GCs are deemed to be accreted while many of the red GCs are deemed to be formed in situ (Côté et al. 1998;Hilker et al. 1999).Under this scenario of the origin of the breaks, we would thus expect that the breaks would be more pronounced for the blue GCs.Our data do not show this -the broken number density profiles of GCSs are observed for both red and blue subpopulations and the magnitude of the break, a−b, does not seem to be different for the two subpopulations.A quantification through simulations however remains desirable.
Next, the galaxy sample presented here contains, unlike the sample of BRS19, also relatively low-mass galaxies, with the masses of the Small Magellanic Cloud.Low-mass galaxies are expected to form mostly by in situ star formation, without a substantial growth through mergers.From this point of view, it is unexpected that we detected breaks also in the low-mass galaxies.On the other hand, the early-type galaxies of our sample, that are supported mainly by velocity dispersion, might have richer merging histories than the typical dwarf galaxies, which are rotating.It is known, however, that dwarf galaxies in cluster environments can transform from disky, rotating dwarfs to spheroidal, dispersion dominated dwarfs via tidal stirring (e.g., Mayer et al. 2001a,b).In total, we found tentative evidence against the origin of the breaks of GCSs through this mechanism.
MOND external field effect and stripping of dark halos
In MOND, the dynamics of an object follows the radial acceleration relation only if the object is isolated.If the object is located in an external gravitational field (for example a galaxy in a galaxy cluster), the apparent enhancement of gravity compared to the Newtonian gravity diminishes because of the nonlinearity of the theory.Once the strength of the external field surpasses the constant a 0 substantially, then the dynamics of the object behaves as in Newtonian dynamics without dark matter.This is called the external field effect (Milgrom 1983c;Bekenstein & Milgrom 1984).If interpreted in the Newtonian way, an object that is exerted to a gradually stronger external field behaves as it was losing its dark matter halo.Observational evidence for the external field effect has been reported (McGaugh & Milgrom 2013;McGaugh 2016;Caldwell et al. 2017;Chae et al. 2020), even if the galaxies in clusters might be an exception (Freundlich et al. 2022).
Galaxy clusters assemble by accreting individual galaxies from less dense environments.The external field effect then reduces the gravitational fields of the galaxies beyond their a 0 radii.At smaller radii, the gravitational field remains Newtonian, as it was when the galaxies were far from the cluster.Therefore, one expects that a break will develop at the a 0 radius.
We made a simple model to explore the impact of this process.The GCs were initiated as orbiting a point source with a mass of 10 10 M ⊙ on circular orbits and a zero external field was assumed.Then the model was evolved for 10 Gyr, increasing the magnitude of the external field linearly with time, such that the external field eventually reached the value of 2a 0 , a value typical for the cores of galaxy clusters (Milgrom 2008).The magnitude of the gravitational force in the presence of an external field was calculated using the so-called 1-D approximation of QUMOND (Famaey & McGaugh 2012).
The trajectories of the modeled GCs are shown in Fig. 9.The horizontal dotted black line indicates the MOND a 0 radius.While the GCs below the a 0 radius stay at their initial orbits, the distant GCs recede as the consequence of the external field effect.This causes a dilution of the GCS beyond the a 0 radius, such that the profile bends down.
It is possible to estimate the impact of the external field effect on the distribution of GCs analytically.If the gravitational potential of the host remains spherical when it is changing, then there is no tidal torque acting on a GC moving on a circular orbit.The angular momentum of the GC is then conserved.The angular momentum of a GC orbiting the host galaxy well beyond the a 0 radius when the galaxy is far from the cluster, is r 0 4 √ GMa 0 .Once the GC and its host appear in a strong external field, compared to a 0 , the gravitational field of the host becomes Newtonian and thus the angular momentum of a GC on a circular orbit becomes r 1 GM r 1 .From the conservation of angular momentum we get: If the Newtonian gravity is assumed, galaxies that enter galaxy clusters can reduce their gravitational fields by stripping of their dark matter halos in the consequence of tidal interactions with the other galaxies (Lee et al. 2018;Mitrašinović 2022).This can mimic the external field effect to a certain degree.
Even this mechanism of creating breaks in GCS density profiles through the reduction of the gravitational fields of galaxies is not perfect.The central galaxies of clusters, that spent their whole lives in the clusters, are not expected to reduce their gravitation fields.Yet NGC 1399 shows a break in the GCs density profile.Moreover, given that the intensity of the external field or tidal stripping is probably different for every galaxy, it is not clear why the galaxies would end up with a relatively narrow range of the external slope of the GCSs, b.Finally, some of the galaxies investigated in BSR19 were isolated (e.g., NGC 821 and NGC 3115), and thus never had an opportunity to develop a break through the external field effect or the tidal stripping of their halo.
Two regimes of dynamical friction
When a GC orbits its host galaxy, it attracts the stars, gas, or dark matter particles of the host galaxy gravitationally and gives them kinetic energy and angular momentum.As consequence of the laws of conservation of these quantities, the orbital angular momentum and energy of the GCs decrease.This is called dynamical friction.Dynamical friction manifests itself as a force acting on the GC against the direction of the velocity of the GC with respect to its environment.In Newtonian gravity, the magnitude of the dynamical friction force can be estimated by Chandrasehkar's formula (Chandrasekhar 1943): where m stands for the mass of the GC, v its velocity with respect to its local environment, ρ the density of the local environment and σ the velocity dispersion of the environment.The expression ln Λ is called the Coulomb logarithm.Its value depends on the exact configuration of the problem under consideration, but it is on the order of a few.A MOND analog of Chandrasekhar's formula was proposed by Sánchez-Salcedo et al. (2006) on the basis of heuristic arguments and theoretical results by Ciotti & Binney (2004): Here a denotes the gravitational acceleration exerted by the host galaxy on the GC.Sánchez-Salcedo's formula has recently been verified by simulations of GCs orbiting ultra-diffuse galaxies by Bílek et al. (2021).It is however supposed to work only in the weak-field regime of MOND, that is for a ≪ a 0 .If a ≫ a 0 , the dynamical friction force should reduce back to the one given by Chandrasekhar's formula.Therefore here we heuristically propose a universal formula for dynamical friction in MOND: where ξ(x) is a function satisfying the limit behavior: It was proposed in BSR19 that these two regimes of dynamical friction in MOND might give rise to the observed breaks in the GCS density profiles at the a 0 radii.For the purpose of the exercise below, we arbitrarily chose We explored if dynamical friction can influence the distribution of GCs noticeably through simplistic models of the dynamics of the GCs of NGC 1399 and NGC 1373.These two are representatives of the most and the least massive galaxies in our sample, respectively.We consider models assuming either Newtonian gravity with dark matter or the MOND gravity.
The distribution of stars of the galaxies was modeled by Sérsic spheres described by the parameters stated in Table 1.It also was important to include the circumgalactic media in our models given the appearance of ρ in Eq. 25.The break radii of the GCSs in our galaxy sample exceed 2-3 times the effective radii of the host galaxies and thus the contribution of the circumgalactic gas to the local baryonic density might be substantial.The masses of circumgalactic media are generally difficult to determine.They could possibly be comparable to the stellar masses of galaxies (Gupta et al. 2012;Werk et al. 2014;Keeney et al. 2017;Das et al. 2019).Here we modeled the distribution of gas around NGC 1399 according to the observed density of hot gas presented in Paolillo et al. (2002) in their Fig. 17.As central cluster galaxy, NGC 1399 is not expected to contain much cold gas (Yoon & Putman 2017;Burchett et al. 2018).The extended gaseous component of NGC 1373 was modeled by a Sérsic sphere whose mass and Sérsic index are the same as of the stellar component but the effective radius is four times larger.
For the Newtonian models, we assumed the NFW dark matter halo derived for NGC 1399 by Bílek et al. (2019b).It was deduced from an isotropic model of the kinematics of the GCS of the galaxy.For NGC 1373, we assumed a NFW dark matter halo of the virial mass of 10 11.35 M ⊙ and a scale radius 13.4 kpc according to the stellar-to-halo mass relation (Behroozi et al. 2013) and the halo mass-concentration relation (Diemer & Kravtsov 2015).
In the MOND models, it was necessary to consider the external field effect coming from the Fornax cluster.For the MOND model of NGC 1373, we included the external field effect such that the galaxy was assigned the Newtonian gravitational field produced solely by the stars of the galaxy.For NGC 1399, as for a central galaxy of the cluster, a zero external field was assumed.
The velocity dispersion that appears in Eq. 25 was obtained by solving the spherical Jeans equation with a zero anisotropy parameter.All material was approximated as consisting of a single population of collisionless particles.Bílek et al. (2021) found that for GCs on circular orbits, the value of the Coulomb poten-tial is around 10 and it is around 3 for radial orbits.In the following calculations, we assumed the value of 10.The GCs in our models had a mass of 10 5 M ⊙ .Their orbits were integrated for 10 Gyr.
For NGC 1399, we found that neither for Newtonian or MOND gravity dynamical friction is able to affect the orbits of GCs noticeably.We explored various orbital eccentricities of the GCs and different apocentric distances in the range between 10 and 200 kpc.Dynamical friction started to influence the orbit of the modeled GC noticeably only after the mass of the GC was increased to at least about 10 8−9 M ⊙ , depending on the orbit shape.Dynamical friction is thus rather expected to affect the spatial distribution of the satellite galaxies.With the mass of the GC of 10 5 M ⊙ , the situation did not change for the MOND model even if we multiplied the density of matter of the galaxy by a factor of 30 to model the hypothesized cluster baryonic dark matter (Milgrom 2008).
For NGC 1373, we explored various orbital eccentricities of the GCs and different apocentric distances in the range between 0.6 to 8 kpc.The friction came out to be the strongest near the center of the galaxy.Radial orbits were affected more than circular.The GCs that moved within about one effective radius of the galaxy eventually settled in the center of the galaxy, both in the Newtonian and MOND models.The central deficit of GCs was immediately balanced by sinking of the GCs that originally moved on bigger orbits.As result, generally no substantial breaks in the GCS density profiles were induced.We show in Fig. 10 an example of orbits that create a very strong break, compared to the other orbits we explored.The break is interestingly located near the a 0 radius.The figure shows the initial versus the final apocentric distances of GCs that move in the Newtonian model and were launched with velocities that are equal to 0.1 of the local circular velocity.The sinking of the GCs caused a mild bend of the density profile at around 2 kpc, which is where the break in the real galaxy is observed.The black dotted line indicates the one-to-one relation.Once the GCs were put on circular orbits, the break stopped being noticeable.In a real galaxy, where the GCs orbits have various eccentricities, the break induced by dynamical friction would thus be even milder than in Fig. 10.We verified that there is no change in these conclusions even for the intermediate-mass galaxy NGC 1379.
This demonstrates that dynamical friction on a GC is alone probably not responsible for the observed breaks in GCS density profiles.In low-mass galaxies, it is important for the dynamics of GCs within the stellar body of the galaxy, but the contribution to the formation of the breaks seems to be insignificant.
Change of mass or a 0
Galaxies can increase their baryonic mass, either by forming stars in situ or by accreting other galaxies.Early-type galaxies, that are investigated in this paper, are also suspected to loose their baryonic mass through baryonic outflows (Fan et al. 2008;Damjanov et al. 2009;Ragone-Figueroa & Granato 2011).Let us approximate the galaxy by a point source that has a mass of M initially and ϵ M after the mass change.According to the radial acceleration relation, the gravitational acceleration in the strong field regime will change by a factor of ϵ, while in the weak field regime by a factor of √ ϵ.Using the argument with the conservation of angular momentum from Sect.7.2, we found that a circular orbit with an initial radius of r 0 will change, after the mass change, to r 1 = r 0 /ϵ in the strong field regime and to r 1 = r 0 /ϵ 1/4 in the weak field regime.This suggests that the change of the baryonic mass could introduce only a different nor-malization of the GCS density profile inside and beyond the a 0 radius, but not the observed change of slope.On the other hand, GCs in GCSs move on trajectories with different eccentricities, which might possibly change the situation.In total, the change of mass does not seem to be responsible for the formation of the observed breaks, but simulations are necessary to exclude this option definitively.It was proposed that in a MOND universe, the value of a 0 could vary with time (Milgrom 1983c).Milgrom (2015) showed that the variation would affect the orbits of objects in a similar fashion as the change of mass described above, but the orbits would be expanding.Again, the above arguments can be used against this mechanism to be responsible for the formation of the observed breaks of density of GCSs.
Other influences shaping the density profile of a GCS
The radial profiles of number density of GCSs can be also shaped by other processes than those listed above.Here we mention some examples, even if it is not currently clear why they would introduce GCS density breaks near the a 0 radii.They should be considered in more complex models of the formation of the density profiles of GCSs.
Early-type galaxies are known to grow their effective radius with time (Daddi et al. 2005;Trujillo et al. 2006;van Dokkum et al. 2009), even if they do not form new stars or do not increase their average stellar masses substantially.A promising explanation of the growth of their radii are repeated minor mergers (Naab et al. 2009).The premerger potential energies of the accreted satellites are transformed into kinetic energy of the stars and dark matter particles of the merger remnant.It is plausible that GCs would absorb some of the energy of the satellites too.This would make the distribution of the GCs more extended, as already explored for major mergers in Bekki & Forbes (2006).
The distribution of the GCs can further be influenced by the tidal destruction of the GCs.Many GCs of the Milky Way are known to have stellar streams (e.g., Ibata et al. 2021).It is possible that the mechanisms listed above will direct some GCs on radial orbits and once the GCs appear close to the center of the galaxy, they would be destroyed by tidal forces (Brockamp et al. 2014).This process would cause the a decrease of the density of the GCS near its center.
The density profiles of dark halos derived from observations under the assumption of Newtonian dynamics tend to show central cores (Kormendy & Freeman 2004;Oh et al. 2008;Donato et al. 2009;Salucci et al. 2012).The proposed explanations include the baryonic feedback (e.g., Governato et al. 2010;Di Cintio et al. 2014).Given that the feedback influences the motion and distribution of the dark matter particles, it will most probably influence the distribution of GCs in the same way.This would explain the observed flattening of the GC density profile toward the center of the galaxy.It is however unclear whether the core would have the a 0 radius.A mere presence of the dark matter core, formed by any mechanism, could affect the distribution of the GCs in the way described in Sect.7.1.
Summary and conclusions
It was found in BSR19 that the number density profiles of GCSs follow broken power laws (Eq. 1) whose break radii coincide with the a 0 radii of their host galaxies.An a 0 radius is defined as the radius at which the acceleration generated by the baryons of the galaxy equals the galactic acceleration constant a 0 .It was shown in BSR19 that the a 0 radii coincide with the break radii better than other characteristic length scales of galaxies, such as the effective radii or the dark matter halo characteristic radii.The galaxy sample of BRS19 nevertheless spans only a relatively narrow range of baryonic masses of the galaxies, namely about one decade.They investigated the distribution of GCs only on the basis of spectroscopic catalogs that can suffer from geometrical incompleteness.They investigated only the total population of GCs.It was unclear, for example, whether the density profiles of the blue and red GC subpopulations follow the broken powerlaw as well and, if so, whether the break radii of the subpopulations are located at the a 0 radius.In the current contribution, we aimed to overcome these deficiencies.We analyzed two catalogs of photometric GC candidates in the Fornax cluster, one based on the ground-based Fornax Deep Survey data and the other on the ACS Fornax cluster data.The profiles of density of GCSs for the lowest-mass galaxies were derived from stacks of the GC candidates over many galaxies of a similar stellar mass.Additionally, we inspected a new spectroscopic catalog of GCs in the vicinity of the central galaxy of the cluster, NGC 1399.We were investigating only the GCSs of ETGs, since for LTGs, it is not possible to distinguish GCs from the numerous compact regions of star formation in the disks.The galaxy sample studied here spans logarithmic stellar masses from 8.0 to 11.4 M ⊙ .The fitted parameters of the GCS density profiles are listed in Table 2.
Our observational findings can be summarized as follows: 1. We were able to detect breaks in the GCS profiles of virtually all galaxies (Appendix A).The exceptions are only the galaxies that have too few GCs.The breaks were found in the entire GC population as well as in the blue and red GC subpopulations (see the figures in Appendix A). 2. In the cases when the outer part of the broken power-law profile was observed in both the FDS and ACS catalogs, the outer slopes, b, generally agreed well, just as the break radii (Appendix A).The inner slope in the FDS data was usually shallower than in the ACS data.We attribute this difference to the greater difficulty of detecting faint GCs near galactic centers in the ground-based data because of the contamination by the light of the host galaxy (Sect.3). 3. The break radii of the total GC population and the red and blue subpopulations are rather similar (Sect.5.1).There is a marginal tendency for the blue GCs to have systematically higher break radii than the red GCs at the 1.5σ confidence level, namely by 0.3 ± 0.2 kpc, on average.4. We calculated the a 0 radii in two ways: assuming the Newtonian gravity and the MONDian gravity, which provides a clear theoretical understanding of the existence of the acceleration scale a 0 .We found that the break and a 0 radii agree typically within a factor of two (Sect.5.3).The a 0 radii calculated from the MOND gravity are less offset from the break radii than the a 0 radii calculated from the Newtonian gravity. 5.The break radii also show significant correlations with the stellar masses of the galaxies, their effective radii, and Sérsic indices (Sect.5.2).None of these correlations is however close to the one-to-one relation.6. Gravitational fields of some galaxies are weaker than the constant a 0 in the whole extents of the galaxies (Sect.4).Such galaxies thus do not have a 0 radii.They still show broken power-law density profiles for their GCSs.7. The outer slope of the GCSs profiles, b, strongly correlates with the a 0 radii (both Newtonian and MOND, Sect.5.2).The correlations of b with the a 0 radii are more significant than the correlation of b with the stellar mass of the host galaxy.This suggests that the mechanism that sets the profile of the GCS is causally linked rather with the spatial distribution of mass in the galaxy than with its total stellar mass.8.The parameter b for the blue GCs is higher (i.e., the profile is less steep) than for the red GCs at the 2 σ confidence level (Sect.5.1).9. We inspected in more detail the galaxy with the highest number of GCs, which is NGC 1399, the central galaxy of the Fornax cluster.We used the new catalog of spectroscopic GCs (Chaturvedi et al. 2022).We divided the GCs into groups according to a similar absolute value of the radial velocity with respect to the center of the galaxy.The shape of the profile shows systematic trends with the mean velocity of the selected group (Sect.6.2, Fig. 6).10.We divided the spectroscopic GCs around NGC 1399 in azimuthal sectors centered on the galaxy and derived the radial profiles of the density of GCs in each of the sectors.When the break radii are plotted in polar coordinates according to the angle of the middle line of the sector, they form an ellipse in the plane of the sky (Sect.6.3, Fig. 8).The major axis of the ellipse points toward the neighboring galaxy NGC 1404.
It is well known that these two galaxies are undergoing a tidal interaction.This demonstrates that break radii are influenced by galaxy interactions.
BSR19 proposed several explanations for the approximate match of the break and a 0 radii.We explored those and a few others in more detail (Sect.7).We made use of simple models and observational arguments.None of them explains our findings completely satisfactory.More elaborate models and simulations are desirable.More data, ideally coming from a larger variety of environments and galaxy morphological types, could give us hints as to why the a 0 and break radii are similar.
Appendix D: Inhomogeneity of the sensitivity of the FDS survey
As we mentioned in Sect.2.3, the catalog of FDS GC candidates shows spatial inhomogeities that form a tile-like pattern, see Fig. 1.The pattern seems to be arising because of a varying sensitivity of the survey both between the individual tiles of the mosaic and inside of the individual tiles.We were not able to remove the tile pattern even if excluding all but the brightest sources.Here we investigate whether the sensitivity variations could affect our measurements of the GCS density profiles.We focused on the cases of the galaxy with the richest and most extended GCS, that is NGC 1399, because it would be affected most by the large-scale sensitivity variations.This galaxy is located close to the center of one of the tiles.
To this end, we constructed radial profiles of the density of sources in four tiles close to NGC 1399, centered approximately on the centers of the respective tiles.In particular, we used the two tiles adjacent from the west to the tile containing the galaxy, and the two tiles adjacent from the east, along the line of a constant declination, see Fig. 1.These tiles do not contain any galaxies with substantial GCSs.The profiles were extracted up to the distance of 30 ′ , which is the distance that was used for constructing the GCS profile of NGC 1399.The regions occupied by the GCSs of intermediate galaxies were masked in the way described in Sect.3.1 before extracting the profiles.The radial bins were chosen such that each contains 800 sources.The measured profiles are shown in Fig. D.1 as the thin jiggles lines.The figure also shows by the thicker lines linear fits to the extracted profiles.The figure shows that the sensitivity at the outermost radius is different by around 10% from the central sensitivity.The sensitivity can both increase and decrease with the distance from the center of a given tile.The central surface densities of sources can differ by several tens of percent between the individual tiles.
We further investigated quantitatively what is the effect of the sensitivity variations on our estimates of the parameters of the broken power-law profiles of the GCSs.We assumed that the sensitivity variations visible in Fig. 1 affect GCs and contaminating sources in the same way.As a first approximation, we assumed that the sensitivity is a linear function of the distance from the center of the tile.We thus took the measured profile of the source density of NGC 1399 Σ(R) and transformed it to: where R max = 30 ′ is the radius of the last bin measured for NGC 1399.The parameter v quantifies the magnitude of the variation of sensitivity.It is the ratio of the sensitivities in the center of the tile (which is the same as the center of NGC 1399) and at R max .The profile Σ 2 was then fitted by the projected broken power law, in the same way we did with the real data.
The results are presented in Table D.1.The table reveals that the position of the break radius is not very sensitive to this type of spatial variation of sensitivity, even for very strong variations of sensitivity.On the other hand, a realistic variation of the sensitivity of plus or minus 10% already has a substantial effect on the measured outer slope b.
This sheds doubt on the measured value of the slopes b for the two galaxies with the most extended GCSs, that is NGC 1399 and NGC 1316.This is why we marked in Table 2 the values of these parameters for these two galaxies as suspicious.The GCSs of the other galaxies in our sample are much smaller (Fig. 1 and the values of break radii in Table 2).The profiles of these galaxies would not be affected much.
Fig. 1 .
Fig. 1.Positions of GC candidates in the FDS catalog, after applying selection criteria.The coordinate system is centered on NGC 1399.North is up, and west is to the right.
Fig. 2 .
Fig. 2. Plots of the statistically significant correlations of the outer slope of the density profile of the GCSs b with the properties of their host galaxies.Top: The MOND a 0 radius.Bottom: The Newtonian a 0 radius.The open symbol indicates a suspicious measurements.The red numbers in the corners indicate the p-value of the Spearman's correlations.
Fig. 3 .
Fig. 3. Plots of the statistically significant correlations of the break radii of GCSs with the properties of their host galaxies.From top to bottom: Stellar mass, effective radius, Sérsic index, MOND a 0 radius, Newtonian a 0 radius.The open symbols indicate suspicious measurements.The red numbers in the corners indicate the p-value of the Spearman's correlations.
. The red numbers in the corners of the tiles of the figure indicate the p-value of the correlations.The open symbols indicate the suspicious measurements.
r
br /r a 0 ,M r br /r a 0 ,N Dataset Mean σ int [dex] Mean σ int[dex]
Fig. 6 .
Fig. 6.Radial profiles of surface density of GCs of NGC 1399 in several bins of radial velocity with respect to the center of the galaxy.
Fig. 8 .
Fig. 8. Break radii of the GCS of NGC 1399 in different sectors centered on the galaxy.The radial coordinate is in arcminutes.The red point indicates the position of NGC 1404.
Fig. 10 .
Fig.10.Effect of dynamical friction on the apocentric radii of GCs that initially move in apocenter with a velocity that is equal to 0.1 of the local circular velocity.
Fig. B.1.Demonstration that the break radius does not depend on the applied magnitude cut.The data for the galaxy NGC 1399 were used.
Fig. D.1.Variation of the density of sources with distance from the centers of four tiles of the FDS mosaic.These are the two tiles to the west of the tile containing NGC 1399 and the two tiles to the east.Table D.1.Simulation of the effect of the large-scale variations of the sensitivity of the FDS survey on the derived parameters of the GCS of NGC 1399.v ρ 0 a b r br γ 0.5 25 +2
Table 1 .
Parameters of the investigated galaxies.
-Notes.Column 1: Common name of the galaxy.Column 2: Designation of the object in the FDS.Column 3: Decadic logarithm of the stellar mass of the galaxy in solar units.Column 4: Effective radius of the galaxy.Column 5: Sérsic index of the galaxy.Column 6: MOND a 0 radius.Column 7: Newtonian a 0 radius.
Table 2 .
Final set of the fitted parameters of the volume density profiles (Eq. 1 or Eq.11) of the GCSs of the investigated galaxies.
Table A.1.In Appendix A in Figs.A.1-A.19 we show plots of the observed density profiles together with the fitted models.For
Table 3 .
Number of galaxies for which the given parameter is larger for the red GCs, for the blue GCs, or for which the values are consistent.The last column gives the total number of galaxies for which this comparison was possible.
Table 4 .
Statistics of parameters of GCSs density profiles. | 23,506 | 2023-08-16T00:00:00.000 | [
"Physics"
] |
Brillouin gain spectrum dependence on large strain in perfluorinated graded-index polymer optical fiber
We investigate the dependence of Brillouin gain spectra on large strain of > 20% in a perfluorinated graded-index polymer optical fiber, and prove, for the first time, that the dependence of Brillouin frequency shift (BFS) is highly non-monotonic. We predict that temperature sensors even with zero strain sensitivity can be implemented by use of this nonmonotonic nature. Meanwhile, the Stokes power decreases rapidly when the applied strain is > ~10%. This behavior seems to originate from the propagation loss dependence on large strain. By exploiting the Stokes power dependence, we can probably solve the problem of how to identify the applied strain, when the identification is difficult only by BFS because of its non-monotonic nature. ©2012 Optical Society of America OCIS codes: (160.5470) Polymers; (280.4788) Optical sensing and sensors; (290.5830) Scattering, Brillouin. References and links 1. T. Horiguchi and M. Tateda, “BOTDA–nondestructive measurement of single-mode optical fiber attenuation characteristics using Brillouin interaction: theory,” J. Lightwave Technol. 7(8), 1170–1176 (1989). 2. T. Kurashima, T. Horiguchi, H. Izumita, and M. Tateda, “Brillouin optical-fiber time domain reflectometry,” IEICE Trans. Commun. E76-B, 382–390 (1993). 3. D. Garus, K. Krebber, F. Schliep, and T. Gogolla, “Distributed sensing technique based on Brillouin optical-fiber frequency-domain analysis,” Opt. Lett. 21(17), 1402–1404 (1996). 4. K. Hotate and T. Hasegawa, “Measurement of Brillouin gain spectrum distribution along an optical fiber using a correlation-based technique – Proposal, experiment and simulation,” IEICE Trans. Electron. E83-C, 405–412 (2000). 5. Y. Mizuno, W. Zou, Z. He, and K. Hotate, “Proposal of Brillouin optical correlation-domain reflectometry (BOCDR),” Opt. Express 16(16), 12148–12153 (2008). 6. M. G. Kuzyk, Polymer Fiber Optics: Materials, Physics, and Applications (CRC Press, 2006). 7. K. Nakamura, I. R. Husdi, and S. Ueha, “A distributed strain sensor with the memory effect based on the POF OTDR,” Proc. SPIE 5855, 807–810 (2005). 8. N. Hayashi, Y. Mizuno, D. Koyama, and K. Nakamura, “Measurement of acoustic velocity in poly(methyl methacrylate)-based polymer optical fiber for Brillouin frequency shift estimation,” Appl. Phys. Express 4(10), 102501 (2011), doi:10.1143/APEX.4.102501. 9. N. Hayashi, Y. Mizuno, D. Koyama, and K. Nakamura, “Dependence of Brillouin frequency shift on temperature and strain in poly(methyl methacrylate)-based polymer optical fibers estimated by acoustic velocity measurement,” Appl. Phys. Express 5(3), 032502 (2012), doi:10.1143/APEX.5.032502. 10. Y. Mizuno and K. Nakamura, “Experimental study of Brillouin scattering in perfluorinated polymer optical fiber at telecommunication wavelength,” Appl. Phys. Lett. 97(2), 021103 (2010), doi:10.1063/1.3463038. 11. Y. Mizuno, M. Kishi, K. Hotate, T. Ishigure, and K. Nakamura, “Observation of stimulated Brillouin scattering in polymer optical fiber with pump-probe technique,” Opt. Lett. 36(12), 2378–2380 (2011). 12. Y. Mizuno, T. Ishigure, and K. Nakamura, “Brillouin gain spectrum characterization in perfluorinated gradedindex polymer optical fiber with 62.5-μm core diameter,” IEEE Photon. Technol. Lett. 23(24), 1863–1865 (2011). 13. Y. Mizuno and K. Nakamura, “Potential of Brillouin scattering in polymer optical fiber for strain-insensitive high-accuracy temperature sensing,” Opt. Lett. 35(23), 3985–3987 (2010). 14. G. P. Agrawal, Nonlinear Fiber Optics (Academic Press, 1995). #171301 $15.00 USD Received 27 Jun 2012; revised 9 Aug 2012; accepted 26 Aug 2012; published 30 Aug 2012 (C) 2012 OSA 10 September 2012 / Vol. 20, No. 19 / OPTICS EXPRESS 21101 15. T. Horiguchi, T. Kurashima, and M. Tateda, “Tensile strain dependence of Brillouin frequency shift in silica optical fibers,” IEEE Photon. Technol. Lett. 1(5), 107–108 (1989). 16. Y. Mizuno, Z. He, and K. Hotate, “Distributed strain measurement using a tellurite glass fiber with Brillouin optical correlation-domain reflectometry,” Opt. Commun. 283(11), 2438–2441 (2010). 17. L. Zou, X. Bao, S. Afshar V, and L. Chen, “Dependence of the Brillouin frequency shift on strain and temperature in a photonic crystal fiber,” Opt. Lett. 29(13), 1485–1487 (2004). 18. M. Nikles, L. Thevenaz, and P. A. Robert, “Brillouin gain spectrum characterization in single-mode optical fibers,” J. Lightwave Technol. 15(10), 1842–1851 (1997). 19. O. Frank and J. Lehmann, “Determination of various deformation processes in impact-modified PMMA at strain rates up to 10%/min,” Colloid Polym. Sci. 264(6), 473–481 (1986). 20. R. Hill, The Mathematical Theory of Plasticity (Oxford U. Press, 1950).
Introduction
Due to their light weight, small diameter, immunity against electro-magnetic noise, etc., optical fiber sensors have been increasingly required for monitoring diverse civil structures, such as buildings, dams, levees, bridges, pipelines, tunnels, and aircraft wings.Above all, Brillouin scattering-based fiber-optic sensors have been extensively studied because they can measure strain/temperature distribution along the fibers [1][2][3][4][5].Up to now, only glass optical fibers (GOFs) have been used for their sensor heads, but they are quite fragile and cannot withstand strains of over several %.As one way to solve this problem, employing polymer optical fibers (POFs) in such Brillouin sensors has attracted considerable attention, which has extremely high flexibility and can withstand > 50% strain [6].Besides, POFs have a unique feature called the "memory effect" [7], with which the information on the applied large strain can be stored due to their plastic deformation.
Commercially-available POFs are classified into two types: poly(methyl methacrylate)based (PMMA-) POFs and perfluorinated graded-index (PFGI-) POFs.The former mainly transmit visible light at around 650 nm, whereas the latter transmit not only visible light but also telecom wavelength light at up to 1.55 μm.Brillouin scattering in PMMA-POFs has not been experimentally observed yet, because some of the optical devices required for the measurement are extremely difficult to prepare at visible wavelength.Therefore, we have developed a method to characterize the Brillouin properties in POFs using a so-called ultrasonic pulse-echo technique [8], and predicted that the strain dependence on Brillouin frequency shift (BFS) in PMMA-POFs is non-monotonic when applied strain is larger than ~10% [9].Meanwhile, Brillouin scattering in PFGI-POFs has already been experimentally observed, since various optical devices are available at telecom wavelength [10][11][12][13].The BFS dependence on strain in PFGI-POFs has been measured to be linear with a coefficient of −121.8MHz/%, but, in our previous experiment, the applied strain ranged merely from 0% to 0.8% [13].Putting their memory effect in perspective, the BFS dependence on large strain in PFGI-POFs needs to be clarified.
In this study, we experimentally investigate the dependence of Brillouin gain spectra (BGS) on large strain of up to 20% in a PFGI-POF.The BFS dependence is found to be nonmonotonic, which agrees with our prediction concerning PMMA-POFs [8].By exploiting this non-monotonic nature, temperature sensing even with zero strain sensitivity will be feasible.As for the Stokes power, it is found to decrease drastically when the applied strain is larger than 10%.We show that this behavior is well explained by the propagation loss dependence on large strain.Utilizing the Stokes power dependence is one way to solve the problem of how we should identify the applied strain, when it is difficult only by BFS due to its nonmonotonic nature.
Principle
When a light beam is propagating in an optical fiber, it interacts with acoustic phonons and generates a backscattered light beam called the Stokes light [14].This phenomenon is known as Brillouin scattering, and the Stokes light spectrum is called the BGS.The center frequency of the BGS is known to be down-shifted from that of the incident light; the amount of this frequency shift ν B , called the BFS, is given as In Eq. ( 1), n is the refractive index, v A is the acoustic velocity in the fiber, and λ is the wavelength of the incident light.
If strain (or temperature change) is applied to the fiber, the BFS shifts toward higher or lower frequency depending on the fiber core material, which is the basic principle of fiberoptic Brillouin sensors.The BFS dependence on strain has been investigated for a variety of optical fibers.They include silica single-mode fibers (SMFs) [15], tellurite glass fibers [16], germanium-doped photonic crystal fibers (PCFs) [17], and PFGI-POFs [13], the strain coefficients of which are reported to be + 580, −230, + 409 (main peak), and −122 MHz/%, respectively.Here, we should note that the value for the PFGI-POF is valid only for strain of < ~1%.
As well as the BFS, the Stokes power is also strain-dependent; for instance, the Stokes power in silica SMFs is known to decrease with the increasing strain [18].The Stokes power (or the Brillouin gain coefficient) is a function of many structural quantities [14], one of which is the effective length L eff defined as where α is propagation loss and L is the fiber length.
Experimental setup
We employed a 1.27-m-long PFGI-POF as a fiber under test (FUT), which had numerical aperture (NA) of 0.185, core diameter of 50 μm, cladding diameter of 750 μm, core refractive index of ~1.35, and propagation loss of ~250 dB/km at 1.55 μm.The experimental setup for investigating the BFS dependence on large strain in the PFGI-POF was basically the same as that previously reported in [10], where the BGS can be observed with a high resolution (3 MHz in this experiment) by self-heterodyne detection.One end of the PFGI-POF was buttcoupled to a silica SMF via an SC connector, and the other end was guided to an optical power-meter.Polarization state was adjusted for each measurement with polarization controllers so that the Stokes power may be maximal.Different strains of up to 20% were applied to the whole length of the PFGI-POF fixed on two translation stages.The temperature was kept at 27 °C for all the measurements.
Experimental results
Figure 1 shows a stress-strain curve of a 0.1-m PFGI-POF of the identical type, which was obtained with the same method (strain-applying speed: 100 mm/min) as in Ref [9].The crosssectional area was assumed to be constant during the measurement.Fracture strain of the PFGI-POF was 71% and its elastic-plastic transition was apparently induced at several % strain [19].The initial peak up to ~10% indicates the elastic-to-plastic transition [20].The measured BGS dependence on large strain of up to 18.3% in the PFGI-POF is shown in Fig. 2(a).It took tens of seconds to manually apply a specific strain to the PFGI-POF; then, a few minutes later, the BGS measurement was performed.When the strain was 2.6%, a small peak was clearly observed at approximately 2.8 GHz.This peak was caused by a 6-cm portion of the PFGI-POF end connected to the silica SMF, to which proper strain was not applicable.From this measurement, the dependences of the BFS and the Stokes power on large strain can be plotted as shown in Figs.2(b) and (c), respectively.In Fig. 2(b), the BFS dependence on strain was non-monotonic; with the increasing strain, the BFS shifted at first toward lower frequency (0-2.6%)(this shift agrees well with the result under small strain [13]), then toward higher frequency (2.6-8.1%), and finally became almost constant (8.1-18.3%).This behavior may be caused by the Young's modulus dependence on large strain [13].In Fig. 2(c), with the increasing strain, the Stokes power decreased, and its reduction grew drastic when the strain was over ~10%.At ~20% strain, the Stokes power became so low that the target BGS was buried by the noise, i.e., by the BGS of the small portion of the PFGI-POF without strain applied.The fluctuations in the Stokes power were caused by the unstable polarization state.The dependence of the Brillouin linewidth on large strain is also a significant property, but we did not evaluate it because the Stokes power was so small that its fair measurement was not feasible.The repeatability of the results above has been confirmed by performing the same measurements for other samples.To clarify the origin of the Stokes power dependence on strain given in Fig. 2(c), the propagation loss was also measured as a function of strain as shown in Fig. 3(a).The loss drastically increased when the strain was over ~10%.Then, based on this figure, the strain dependence of the effective length was calculated using Eq. ( 2), which is given in Fig. 3(b).With the increasing strain, the effective length started to decrease drastically when the strain was over ~10%, which is in good agreement with the Stokes power dependence.In Fig. 3(b), when the strain was smaller than 5%, the effective length was slightly increased with the strain, because the actual fiber length was elongated owing to the strain and it compensated for the increase in the loss.This behavior is different from that in the Stokes power dependence in Fig. 2(c), but it is valid if we consider that the small peak at ~2.8 GHz caused by the small portion of the PFGI-POF was overlapped with the target BGS at such small strains.Thus, we presume that the reduction of the Stokes power in the PFGI-POF with strain is attributed to that of the propagation loss.By exploiting these unique Brillouin features of PFGI-POFs, some useful devices and systems will be developed.For instance, the BFS in a PFGI-POF to which 10-15% strain is applied has no strain dependence, and consequently it may be used for temperature sensing with almost zero strain sensitivity.We must note that, due to the non-monotonic nature of BFS, the identification of the applied large strain is sometimes difficult only by BFS.We may solve this problem by applying ~2.6% strain beforehand and/or by using the Stokes power dependence on strain as well.
Conclusion
By applying large strain of up to 20%, the strain dependence of BGS in the PFGI-POF was measured in detail.The BFS exhibited a non-monotonic nature, with which temperature sensing even with zero strain sensitivity will be feasible.The Stokes power drastically dropped when the applied strain was larger than ~10%.This behavior is probably caused by the large-strain dependence of the propagation loss.The identification of the applied strain is sometimes difficult only by BFS due to its non-monotonic nature; in that case, using the Stokes power dependence will be one of the solutions.Since this nature may vary depending on the time of applying strain or the types of the POFs (PMMA-POFs, partially-chlorinated POF, etc.), further investigation is required on this point.We believe that these results are of great significance in developing large-strain sensors based on Brillouin scattering in POFs.
Fig. 3 .
Fig. 3. Measured large-strain dependences of (a) the propagation loss and (b) the effective length in the PFGI-POF. | 3,239.6 | 2012-09-10T00:00:00.000 | [
"Materials Science",
"Physics"
] |
MANUFACTURING PLANNING AND CONTROL SYSTEM USING TOMOGRAPHIC SENSORS
The article presents an idea of a production process control system. Advanced automation and control of production processes play a key role in maintaining competitiveness. The proposed solution consists of sensor networks for measurement process parameters, production resources and equipment state. The system uses wired and wireless communication, which gives possibility to acquisition data from existing in enterprise sensors and systems as well as acquisition data from new systems and sensors used to measure all processes, starting from production preparation to the final product. The solution contains process tomography sensors based on electrical capacitance tomography, electrical impedance tomography and ultrasound tomography. The use of tomographic methods enables to manage the intelligent structure of the companies in terms of processes and products. Industrial tomography enables observation of physical and chemical phenomena without the need to penetrate inside. It will enable the optimization and autooptimization of design processes and production. Such solutions can operate autonomously, monitor and control measurements. All sensors return to the system continuous data about state of processes in some technologically closed objects like fermenters. Process tomography can also be used to acquisition data about a flow of liquids and loose ingredients in pipeline based on transport systems. Data acquired from sensors are collected in data warehouses in order to future processing and building the knowledge base. The results of the data analysis are showed in user control panels and are used directly in the control of the production process to increase the efficiency and quality of the products. Control methods cover issues related to the processing of data obtained from various sensors located at nodes. Monitoring takes place within the scope of acquired and processed data and parameter automation.
Introduction
First Information Technology systems in the world were used in enterprises in 1960's when early computers appeared. Firstly, systems cover small parts of operating space, like ROP systems (Reorder Point), which were charge of replenishment of stock when inventory drops down to zero. Later, systems grow up, MRP (Material requirement planning) was built with cooperation between J.I. Casemanufacturer of tractors and IBM. Those kinds of systems were charge of planning material demand in production process. Next step, MPC systems (Manufacturing planning and control) were built. Those systems gave the possibility of collecting data in storages. Of course, from our point of view every system was very primitive, technological limitations like small amount of memory, mass storage problems -we didn't know hard drives, we knew magnetic tapes, lack of processorswhich were slow, had limitations, for example first processors couldn't calculate square root. Systems were very expensive and big, difficult to use, they needed a lot of stuff who had mastered a technique [10]. In 70's systems and IT companies were developing. In 1972 five IBM's workers founded SAP company, and one year later completes its first accounting system [24]. In August 1977 was founded Oracle, in 1979 they started selling first relational database systems (in fact it was version number two) [4]. In 80's works were conducted on second generation of systems, like MRP-II (Manufacturing resource planning). MRP-II included stock reporting as well as production and procurement scheduling and cost reporting. New abbreviation CIM (computer integrated manufacturing) was introduced in the middle of 80's. The CIM model introduced a concept of integration all kinds of electronic data processing application in all enterprise's divisions connected with production, starting from design department to quality control, although implementation often failed due to complexity, lack of standardization and technologies [17]. Next type of IT systems is called ERP (Enterprise resource planning). The term was used by the Gartner Group in 1990 first time. ERP systems attempts to integrated processes and data in organization. The data are stored in a single database. The database stores, shares and manages data from different departments [6]. In practice the name ERP is more ambition than system really is, because usually they are administrative, automation accountant tasks and material management systems. Gap between ERP and lower level systems like MRP-II in CIM model was filled by MES systems (Manufacturing execution system). First time term MES was promulgate in 1992 by AMR Research Inc. The MES systems are based on functionality for planning, execution and control production process, as well as act MES must react in real time in case of interrupt. MES systems were overtaken by ERP, because ERP systems are helping business executive with financial decisions. The plant level executives, who had to take important decisions, can't rely on business information. Crucial data comes from the plant, and plays significant role in process optimization, thus MES systems are becoming important software in manufacturing ecosystems [28]. In recent years technology has evolved. We have available new equipment and software. Almost everyone has a smartphone in their pocket, a computer and a tablet on their desk. We can use cheap sensors and hardware, RFID, Beacons, Internet of Things are very fashionable words. We have access to Cloud Computing technology which offer on demand almost unlimited computing power and storage space, but also, we can use cognitive services which can be helpful to process huge amount of data. The whole electronic world is more accessible, it gives new possibilities in building MES systems. We can collect data from large number of various sensors, Cloud computing and Big Data allows us to store and process collected data. Increase technology also has a big impact on the development of process tomography. We can build smaller, cheaper and more useful device as well as smarter, faster and more accurate algorithms. At the same time, customers' requirements are constantly growing. They needs customized products, shorter production time, possibility to making changes, higher quality and lower price. World continuously changing, then we have to build systems significantly more elastic and more responsive for customers' needs. The above expectations and possibilities can meet in a common point.
Fig. 1. Manufacturing control model
In this paper we propose production control system that will meet users' and management's needs, will be able to collect the right data, process it using the latest solutions, and return the results to both processes and users. The distributed and open architecture of the solution should allow use of a wide range of sensors and measuring and executive devices, including process tomography systems. The solution should also enable dynamic adaptation to changing conditions and needs e.g. the need to add new sensors.
System background
According to Baker, factory control is charge of conduct production process to make desired product from manufacturing resources and information. Manufacturing control model was presented on [1]. Baker wrote that system should decide what to produce, how much to produce, when production is to be finished, what resources to use, how and when use them and make them available, when to release jobs into the factory, which job to release, job routing, and operation sequencing [1]. Single central factory controller has difficulty to deal with complexity coming from production system (e.g. complexity of data management, uncertainty connected with demand and resources availability, lags between events and relevant information processing, real-time constraints). Popular solution for these kinds of problems is separation of responsibilities into several decisional entities, introducing a non-centralized control system. In our case non-centralized means division of a global control process based on a selected splitting criterion e.g. functional [27]. Functional horizontal division was defined by Association of German Engineers, that works on MES Systems in details. In their guideline VDI 5600 they define 3-levels manufacturing company structure. As can be seen from, business management is represented by ERP systems, production management is represented by MES system, and finally production level is represented by workplaces, machines and equipment. Manufacturing execution systems are subject of work for a lot of organizations. They are trying to specify standards which can help implement system in manufacturing enterprises. To define common functionality, MES functions specified by different standards was presented in Table 1 [25]. There was selected four standards: Manufacturing Execution Solution Association MESA [16], Normenarbeitsgemeinschaft für Meß und Regeltechnik in der chemischen Industrie NAMUR [18], VDI-Kompetenzfeld Informationstechnik [11], National Institute of Standards NIST [2].
As you can see on the Table 1, all considered standards have defined areas connected with detailed planning, quality management, master data management. Other functionality is described in three or less number of specification. This may indicate that these functions are specific to selected industries. The MES system occupies a central position in a manufacturing company. According to VDI, MES is charge of nine areas: detailed planning and control, information management, quality management, human resources management, production facility management, performance analysis, data collection and material management. MES is capable of exchanging information between the business level and the production level. The production level includes complex machines and subsystems for production tasks, which are involved in production process. Production equipment generate data which MES system uses to represent the actual state. From this point of view, production control system should integrate equipment, machines and exchange data in way which enable control over production subsystems [11].
Distributed system
Manufacturing companies often have very complex and complicated operational processes, additionally it happens that operations are extended over many partnership companies which make smaller parts of products or plays role in a part of production processes. To stay on the market firms should change dynamically products, and have always fast reaction to disturbances. This need forces the use of systems with high level interactions and integration, at the same time with great flexibility for changes in configuration or components. This requirement meets distributed systems sometime called collaborative automated production systems. These systems have several special features [13]: A complex problem is divided into several small problems, using a distributed approach, with the development of intelligent building blocks, i.e. control units. Each control unit is autonomous, having its own objectives, knowledge and skills, and encapsulating intelligent functions, however, none of them has a global view of the system. The global decisions (e.g. the scheduling, monitoring and diagnosis) are determinate by more than one control unit, i.e. the control units need to work together, interacting in a collaborative way to reach a production decision. Some control units are connected to physical automation devices, such as sensors, robots and CNC machines.
We have a number of tools to implement this kind of systems. Agents, holons, webservices and microservices can be used. As communication protocols very popular are JSON (JavaScript Object Notation) [5] and XML (eXtensible Markup Language) [29], and for aggregate events MQTT (MQ Telemetry Transport) [9] protocol can be used. Agents are autonomous, intelligent, adaptive and cooperative objects, the most suitable definition for agents is: "An autonomous component that represents physical or logical objects in the system, capable to act in order to achieve its goals, and being able to interact with other agents, when it does not possess knowledge and skills to reach alone its objectives" [14]. The word "holon" was proposed by Koestler in 1971. This is combination of Greek words "holos" and "on". Greek word "holos" meaning whole and the Greek suffix "on" meaning particle od part like proton or neutron. Holon can function as part another object and at the same time it is autonomous whole [8]. Holonic manufacturing system idea was proposed in 1990. HMS contains autonomous building blocks called holons, They have ability to transforming, transporting, storing and validation information or physical objects. Holons cooperate each other during the process, in order to achieve production goals. The holon in HMS must enable features [7]: autonomy: Each holon must be able to create, control and monitor the execution of its own plans and/or strategies, and to take suitable corrective actions against its own malfunctions, cooperation: Holons must be able to negotiate and execute mutually acceptable plans and take mutual actions against malfunctions, openness: The system must be able to accommodate the incorporation of new holons, the removal of existing holons, or modification of the functional capabilities of existing holons, with minimal human intervention, where holons or their functions may be supplied by a variety of diverse sources.
Web service is data source or application accessible via HTTP (HypeText Transfer Protocol) or its encrypted version HTTPS (HyperText Transport Protocol Secure) protocols. Opposite to WebApplication webservices are designed to communicate with other programs, not users. The most popular webservice protocol is SOAP (Simple Object Access Protocol)add a header to XML message before it is transferred over HTTP [3].
Microservice is module which supports a specific business goal, and uses well defined interface to communicate with other sets of services. Microservices architecture is an approach to developing a single application as a set of small services, each running in its own process and communicating with lightweight protocol [21].
System idea
Whole system idea was presented on Fig. 5. According to VDI 5600 guideline, system is divided horizontally in three main operational levels. The first level contains business systems, working in manufacturing company, it includes customer web portal, external information systems like ERP, web and mobile platform. First level already contains internal information systems like mobile production supervising application and other systems used in the enterprise. Business applications communicate with second level application via data services like WebApi, WebServices, WCF services, with use any communication formats and protocols like JSON, XML MQTT etc.
Data services should be projected as façade for master data service, this will allow to add any chosen communication method. Second level a larger number of elements. The start components of the system are Goals agent. The primary functions of Goals agent are asking data store about commissions queue, checking resources availability, knowledge about products and finally send commissions to Automation and customization product service. Goals agent ask Expert system service about expert's knowledge about product's production process. Expert system collects expert knowledge, questions and answers about production process. This data will be used in future to optimization production process. Goals agent also takes data from supplier's databases, HR system, warehouse system and data about production needs. Automation and customization product service prepare production process, divide commissions to smaller atomic parts and send to Plant floor objects. Plant flor objects we can define as services connected with physical factory equipment like sensors, actuators, PLC, CNC machines as well as data readers from transport equipment, measurement systems, RFID gates etc. As a plant floor object could be installed process tomography equipment to measure process parameters in closed production objects like pipelines or chemical reactors. Automation and customization module consists products reference models and adaptive control subsystem. The task of the automation module is to control the manufacturing process and receive process data. Collected data are used to adjust the control system settings. Intelligent measurement module is used to extend the measuring system with additional sensors and devices. This is area where process tomography equipment will be used. Additionally, could be used sensors which are important from production management and business management point of view, but which were not prepared by devices' manufacturers. This module contains measurement subsystem and intelligent sensors controller. Effect agent should collect data about production results, this data is collected in big data service, and after process will be use in future. The analytical engine is based on computational intelligence algorithms. This module takes data, process them, and returns as synthetic information ready to use in future processing. The last described module is communication module. This system's part is charge of data exchanging between production system and external applications like web applications, ERP systems, mobile applications as well as IT systems for control and supervisor production. The idea to use tomography to monitoring industrial processes emerged in the nineties. Since then, various tomographic methods have appeared, including: electrical tomography, magnetic, optical, ultrasonic, microwave and radioactive tomography. This concept requires new data processing strategies and a correct extension of the classical control theory, because the latter is not developed enough for a large amount of sensor data and must be created on a non-parametric criterion [12,15,19,20,22,23,26,30]. Industrial tomography applications are usually a challenge for obtaining spatial distribution data from observations beyond the boundary of the process. Sensor networks with the feedback loops are the fundamental elements of the production control. Here, the future belongs to distributed sensors and imaging (Fig. 6).
Examples
Algorithms manual control and automatic cover issues related to the processing of data obtained from various sensors located at key nodes of the system. Supervision and control is in the range of acquired and processed data and device parameters implementing automation such as servo valves, pumps supply and rotary flow, etc. The primary / main feature of the use of methods of wireless is to obtain important information about the process and the state of the installation in real time by persons having strategic importance in the management and technical supervision. The Solution of the diagram of a multiphase flows system is presented in Fig. 7. Figure 8 shows the example of the image reconstruction by ECT, UT and EIT.
Conclusion
In this work, there was presented the control and steering in cyber-physical system based on an idea of a production process control system and tomographic sensors. They were made virtual arrays equipped with active elements, control algorithms of manual and automatic. Running an application for processing and data obtained from various sensors disposed at key nodes installation. Supervision and control is in the range of acquired and processed data and parameters of devices implementing automation. A new science will define new mathematical foundations with formalisms to specify, analyze, verify and validate systems that monitor and control physical objects and entities. This system includes new measurement techniques and designs innovative smart measuring devices. The application structure covers a communication interface, unique algorithms for optimization and data analysis algorithms for image reconstruction and process monitoring | 4,148.2 | 2018-09-25T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Coherent perfect absorption of nonlinear matter waves
Eliminating atoms of a Bose-Einstein condensate in a lattice from one cell is coherent perfect absorption of the quantum liquid.
INTRODUCTION
Coherent perfect absorption (CPA) is the complete extinction of incoming radiation by a complex potential embedded in a physical system supporting wave propagation. The phenomenon is based on destructive interference of transmitted and reflected waves. The concept was introduced (1) and observed experimentally (2) for light interacting with absorbing scatterers. CPA was also reported for plasmonic metasurfaces (3), graphene films (4), and sound waves (5). Technologically, CPA is used to design switching devices (6) and logic elements (7), in interferometry (2), and in many other applications (8). All these studies deal with perfect absorption of linear waves. Here, we extend the paradigm to a CPA of nonlinear waves and experimentally demonstrate it for matter waves with an atomic Bose-Einstein condensate (BEC). Conditions for CPA of matter waves can be satisfied easier than for its linear analogs because the strength of two-body interactions offers additional freedom for control. The observation of CPA of nonlinear matter waves paves the way toward a much broader exploitation of the phenomenon in nonlinear optics, exciton-polariton condensates, acoustics, and other areas of nonlinear physics.
CPA is a delicate phenomenon requiring precise tuning of the absorber and of the relative phases of the incoming waves. When the respective conditions are met for a particular wave vector, the radiation incident from both sides is completely absorbed (Fig. 1A). CPA can be viewed as a time-reversed process to lasing (1,2), where the absorber is replaced by a gain medium and only outgoing radiation exists for a given wave vector. This time-reversed process is related to the mathematical notion of a spectral singularity (9), that is, to a wave vector at which the system can emit radiation with none incoming (10,11). Therefore, in the scattering formalism, the wavelength at which CPA occurs is called a time-reversed spectral singularity.
Recently, the concept of nonlinear CPA was introduced in optics (12-15). Such a device represents a nonlinear absorbing slab, sometimes with nontrivial composite internal structure (16,17), which is embedded in a linear medium and, thus, perfectly absorbs incident linear waves. Since, however, the propagating medium can itself be nonlinear, as is the case for an optical Kerr nonlinearity or an interacting BEC, the natural (and still open) question arises about the physical meaning of CPA in a nonlinear medium. In other words, what are the scattering properties of nonlinear waves interacting with a linear absorbing potential?
The implementation of CPA in a nonlinear medium offers several challenges that raise doubts about whether such a phenomenon can exist and whether it is physically meaningful. In such a setting, the "linear" arguments do not work: There is no well-defined transfer matrix connecting left and right incident waves [problems, where either the incident or the transmitted radiation is given, have different solutions (18)]; because of interactions among the modes, there exists no interference in the linear sense, and results on scattering of monochromatic waves do not give any more an answer on the scattering of wave packets used in experiments. Moreover, even if CPA can exist in a nonlinear medium, its realization is still questionable. There exists no general method of computing the system parameters, like the zeros of the transfer matrix elements in the linear case. Thus, tuning system parameters in situ might be the only possibility to realize CPA in nonlinear media. Furthermore, even the realization of plane waves may be practically impossible due to instabilities ubiquitous for nonlinear systems.
Here, we show that all the above challenges can be overcome: CPA for nonlinear waves does exist, can be observed experimentally, and can even be more easily achieved because of intrinsic nonlinearity. Theoretical indication for such CPA stems from the existence of stable constant amplitude currents in a nonlinear waveguiding circle with equal absorbing and lasing potentials (19). Experimental indication comes from recent experiments on driven dissipative Josephson systems (20).
RESULTS
Here, we consider an atomic BEC residing in a periodic potential, realized by an optical lattice. The superfluid nature of the BEC allows for tunneling between the wells, while interatomic collisions lead to an intrinsic nonlinearity. One of the wells is rendered absorptive by applying an electron beam (21), which removes atoms from that well. The effective experimental system is sketched in Fig. 1B: A well with linear absorption, embedded between two tunneling barriers, is coupled at both ends to a nonlinear waveguide. For an introduction to the experimental techniques for manipulating ultracold atoms in optical potentials, the reader is referred to (22). Experimental details of the optical lattice, the preparation of the BEC, and the experimental sequence are given in Materials and Methods. The depth of the periodic potential and the number of atoms N in each lattice site (N ≈ 700) are chosen, such that we can apply the tight-binding approximation of mean field dynamics, when the condensate is described in terms of the density amplitudes y n (t) (23,24) iℏ dy n dt ¼ ÀJðy nÀ1 þ y nþ1 Þ þ Ujy n j 2 y n À i ħg 2 Here, n enumerates the lattice sites, and g ≥ 0 describes the dissipation strength applied to the site n = 0. Theoretically, such a system was first considered in (25) within the framework of the Gross-Pitaevskii equation. Later on, it was treated within the Bose-Hubbard model (26,27). Current states in BECs in the presence of dissipation and external drive were also studied theoretically in (19,28) and experimentally in (20,29,30).
Since CPA is a stationary process, we look for steady-state solutions of Eq. 1 in the form y n (t) = e −i(m/ħ)t u n , where all u n are time-independent and m is the chemical potential. First, we revisit the linear case corresponding to noninteracting atoms,mũ n ¼ ÀJðũ nÀ1 þũ nþ1 Þ À iðħg=2Þũ n d n0 , where we use tildes to emphasize the limit U = 0. In the absence of dissipation, that is, at g = 0, the dispersion relation in the tightbinding approximation readsm ¼ À2Jcosq, where q ∈ [0, p] is the wave number. When dissipation is applied at n = 0, we consider the leftũ L n ¼ a L e iqn þ b L e Àiqn for n ≤ −1 and the rightũ R n ¼ a R e iqn þ b R e Àiqn for n ≥ 1 solutions, where a L and b R (a R and b L ) are the incident (reflected) waves from left (L) and right (R), respectively (see Fig. 1). The transfer 2 × 2 matrix M with the elements M ij (q) is defined by the relation (a R , b R ) T = M(a L , b L ) T , where T stands for transpose. Computing M (see Materials and Methods), one verifies that, for q = q ⋆ and q ¼ q 1 the element M 11 vanishes [M 11 (q ⋆ ) = 0], while the other elements become M 21 (q ⋆ ) = − M 12 (q ⋆ ) = 1 and M 22 (q ⋆ ) = 2. At these wave numbers, the problem admits a solution consisting of only incident waves, that is, a L = b R and b L = a R = 0. Thus, two CPA states occur for slow (q = q ⋆ ) and fast (q ¼ q 1 ⋆ ) matter waves. The points q ⋆ andq 1 ⋆ are called time-reversed spectral singularities.
If instead of eliminating, one coherently injects atoms into the site n = 0, that is, g < 0, Eq. 1 admits spectral singularities q ⋆ ¼ Àq ⋆ and the solution a R = b L and a L = b R = 0 describes coherent wave propagation outside the "active" site, corresponding to a matter-wave laser. Since the change g → − g in Eq. 1 is achieved by applying the Wigner time reversal operator T : T Yðr; tÞ ¼ Y*ðr; ÀtÞ where Y is an order parameter of the BEC, a coherent perfect absorber corresponds to a time-reversed laser (1).
The CPA solutions of Eq. 1 for linear waves have the following properties: They exist only for dissipation rates with g ≤ g th = 4J/ħ. The amplitude of the absorbed waves is constant in all sites, including the site where atoms are eliminated, and the group velocity at q ⋆ is directly set by the decay rate: Bearing these properties in mind, we now turn to the nonlinear problem, setting U > 0 (repulsive interactions among the atoms). We search for a steady-state solution of Eq. 1 with a constant amplitude r in each lattice site. The requirement for the existence of only left and right incident waves can be formulated as u L n ¼ re iqn for n ≤ −1, u R n ¼ re Àiqn for n ≥ 1, and u 0 = r. This fixes m = − 2J cos q + Ur 2 , and the matching conditions at n = 0 imply that the steady-state solution exists only if q = q ⋆ (and q ¼ q 1 ⋆ ) given by Eq. 2. Thus, we have obtained CPA for nonlinear matter waves, which still corresponds to the time-reversed laser. Indeed, as in the linear case, replacing the dissipation with gain (that is, inverting the sign of g), one obtains the constant amplitude outgoing-wave solution: u L n ¼ re Àiq ⋆ n for n ≤ −1 and u R n ¼ re iq ⋆ n for n ≥ 1.
One essential difference between linear and nonlinear CPA is particularly relevant for the experimental observation of the phenomenon: the stability of the incoming superfluid currents. The stability analysis (see Materials and Methods) shows that the nonlinearity qualitatively changes the result: Only slow currents (q = q ⋆ ) can be perfectly absorbed, while the fast nonlinear currents The CPA solution is mathematically valid only in the infinite lattice because the absorption at the center must be compensated by steady particle fluxes incoming from infinity. However, the CPA phenomenon is structurally robust and can be observed as a quasi-stationary regime even in a finite lattice. To demonstrate this, we numerically simulated Eq. 1 with about 200 sites. The initial condition corresponds to the ground state of a BEC with an additional small harmonic confinement along the lattice direction. Figure 2 (A and B) shows the obtained behavior for dissipation strengths g below (A) and above (B) the CPA breaking point g th . Figure 2A shows that the solution rapidly enters a quasi-stationary regime where its density remains constant in space and is only weakly decaying in time due to an overall loss of atoms in the system. Above the breaking value (Fig. 2B), a strong decay sets in and the atomic density is not homogeneous in space any more. An important feature of CPA is the balanced superfluid currents toward the dissipative site, characterized by the distribution of phases, illustrated in Fig. 2F. The CPA regime manifests itself in the L-shaped phase profile whose slope is q ⋆ for negative n and −q ⋆ for positive n. This phase pattern is completely different when the system is not in the CPA regime: It is nearly constant, showing weak nonmonotonic behavior for positive and negative n, with a large jump at the central site.
Together with the numerical simulation, we also show in Fig. 2 the corresponding experimental results. The experimentally measured filling level of the dissipative site shows very good agreement with the numerical simulations: The atom number in the dissipated site is constant in time ( Fig. 2A) and equal to all neighboring sites (Fig. 2C). This steady state is the experimental manifestation of CPA of matter waves. The CPA solution is established also for other values of the dissipation strength. This highlights the fact that the nonlinearity together with the dissipation generates an effective attractor dynamics toward the CPA solution. Increasing the dissipation above a critical value leads to a qualitative change in the behavior (Fig. 2, B and D). In accordance with the theoretical prediction, the occupation in the dissipated site rapidly drops and stays small. Hence, CPA can, indeed, only be observed in a finite parameter window.
The theoretical results predict the transition from CPA and the non-CPA regime at g th = 4J/ħ, above which no quasi-steady state is established anymore. In the experiment, a qualitatively similar situation occurs (Fig. 3). However the CPA regime breaks down at a lower dissipation rate of g exp ≈ J/ħ. This can be explained by two factors. First, the transverse extension of each lattice site, not fully accounted for by the tight-binding approximation (Eq. 1), makes the condensate vulnerable against transverse instabilities (31), which can develop at smaller g than predicted by Eq. 1. The second factor relates to the way the experiment is conducted. At t = 0, the condensate is loaded in a lattice and is characterized by the chemical potential, which is determined by the trap geometry and by the filling of each lattice site. When the elimination of atoms starts, the system is induced in an unstable regime. Quasi-stationary behavior is only possible if the chemical potential is not changed appreciably under the action of the dissipation. This requires the filling of the central site due to tunneling from the neighboring ones to be fast enough to compensate the loss of atoms. Thus, the tunneling time estimated as t tun~ħ /J should be of the order of or smaller than the inverse loss rate g −1 ; otherwise, the induced thermodynamical equilibrium at t = 0 cannot be compensated by the incoming superfluid currents, and the collective dynamics described by Eq. 1 cannot be established. This gives an estimate g~J/ħ for the threshold dissipation rate.
CONCLUSION
Our results present the proof of concept of the CPA paradigm for nonlinear waves. The experimental setting explored here can be straightforwardly generalized to BECs of other types such as spin-orbit-coupled, fermionic, and quasi-particle ones, and furthermore to other branches of physics, including nonlinear optics of Kerr media and acoustics. Our system can also be exploited as a platform for studying superfluid flows in a linear geometry (which is an alternative to most commonly used annular traps), as well as for understanding the fundamental role of Bogoliubov phonons in stabilizing quantum states. Since CPA can be viewed as time-reversed lasing, the reported experimental results pave the way to implementing a laser for matter waves, for which elimination of atoms from the central site should be replaced by injecting atoms. Furthermore, the observation of CPA in nonlinear media, and possible lasing of matter waves, can be viewed as an additional element for the rapidly developing area of quantum technologies based on atomtronics (32). The reported results also open the possibility of using CPA regimes in nonlinear optical circuits.
In the general context of scattering by dissipative potentials (33), given the fact that the atomic interactions can be tuned by a Feshbach resonance, a linear spectral singularity can be experimentally realized by starting from the nonlinear case and subsequently reducing the interactions to zero adiabatically. Such a scenario explicitly exploits the attractor nature of the CPA solution. Being an attractor in an essentially nonlinear system, CPA can serve as a mechanism to control superfluid flow parameters, such as the chemical potential, superfluid velocity, or sound velocity, in a particularly simple way.
MATERIALS AND METHODS
The transfer matrix To compute the transfer matrix M, we denoted the solution in the point n = 0 by u 0 and considered the equationmũ describing stationary currents, in the points n = 0 and n = ±1 using the explicit forms for the waves in the leftũ L n ¼ a L e iqn þ b L e Àiqn (n ≤ −1) and the rightũ R n ¼ a R e iqn þ b R e Àiqn (n ≥ 1) half-space. From the equation, with n = 0 and using the expression for the chemical potentialm ¼ À2Jcos q, we obtained u 0 With this expression, the equations at n = ±1 are transformed to a linear algebraic system, which is solved for the pair (a R , b R ), giving their expressions through (a L , b L ), thus determining the transfer matrix
Stability analysis
We analyzed the stability for a BEC within the framework of the discrete model iħ dy n dt ¼ ÀJðy nÀ1 þ y nþ1 Þ þ Ujy n j 2 y n À i ħg 2 y n d n0 ð6Þ and required that the left and right incident superfluid currents have to be stable. Their stability is determined by the stability of the corresponding Bogoliubov phonons on an infinite homogeneous lattice (that is, without applied removal of atoms). The stability of the homogeneous lattice is found using the substitution y n ðtÞ ¼ e Àiðm=ħÞtþiqn ðr þ v n e Àiwtþikn þ w n *e iw*tÀikn Þ ð7Þ where r > 0 characterizes the uniform density, and |v n |, |w n | ≪ r are small perturbations. Linearizing Eq. 6 (with g = 0) with respect to v n and w n , we found two dispersion branches Consider now a positive scattering length, U > 0, which corresponds to the experiments reported here. One can then identify the stability domain for Bogoliubov phonons and, hence, the stability of the superfluid current, requiring w ± to be real for the given q and all real k. This results in the constraint 0 ≤ q < p/2, that is, only slow currents are dynamically stable.
Experimental setup
We used a BEC of 87 Rb with about 45 × 10 3 atoms in a single-beam dipole trap realized by a CO 2 laser (maximum power, 10 W; beam waist, Fig. 2A), corresponding to the CPA regime. Above this value, the dissipation dominates the dynamics, and the filling level decays exponentially (compare Fig. 2B). The statistical error of the decay rate is smaller than the size of the points; however, we estimate a 5% systematic error due to technical imperfections such as drifts of the electron beam current. The error in the dissipation rate originates from the calibration measurement (see Materials and Methods).
30 mm). The condensate is cigar-shaped and has dimensions of 80 mm × 6 mm × 6 mm. We then loaded the BEC into a one-dimensional optical lattice created by two blue detuned laser beams (l = 774 nm; beam waist, 500 mm) crossed at an angle of 90°. The linear polarization of both laser beams was along the same direction, such that the interference pattern was maximally modulated. The resulting lattice has a period of d = 547 nm. The trap frequencies in a lattice site are n r = 165 Hz (transverse direction) and n z = 12 kHz (lattice direction). Each site contains a small, pancake-shaped BEC with about 700 atoms (value in the center of the trap). The total number of lattice sites is about 200. The lattice depth V 0 in units of the recoil energy E r = p 2 ħ 2 /(2md 2 ) (m is the mass of the atom) is given by V 0 = 10E r . An electron column, which is implemented in our experimental chamber, provides a focused electron beam, which is used to introduce a well-defined local particle loss as a dissipative process in one site of the lattice. To ensure a homogeneous loss process over the whole extension of the lattice site, we rapidly scanned the electron beam in the transverse direction (3-kHz scan frequency) with a sawtooth pattern. To adjust the dissipation strength g, we varied the amplitude of the scan pattern. An image of the experimental chamber together with a sketch of the optical trapping configuration is provided in fig. S1.
Why the lattice is necessary
As mentioned in the main text, the superfluid currents under localized dissipation were studied previously in inhomogeneous BECs (22,25), where no CPA was observed. Mathematical solutions of the Gross-Pitaevskii equation with localized dissipation describing such models can, however, be found. Such solutions are stable and have stationary amplitudes. Consider the stationary Gross-Pitaevskii equation with strongly localized dissipation modeled by the Dirac delta function g 0 d(x), where g 0 is a positive constant, without any optical lattice Here, g > 0. One can verify that the function is a solution of Eq. 9 with the chemical potential m ¼ gr 2 0 þ g 2 0 =4. This raises questions about the role of the optical lattice and about its necessity for realizing CPA experimentally. The answer resides in the way of exciting the CPA regime. A strictly homogeneous background density can only exist if the dissipation is point-like (described by the Dirac delta function d) and, thus, experimentally unrealistic. Any finitesize, even very narrow, dissipation generates Bogoliubov phonons at the instant it is applied. In the continuous model, the phonons can propagate with arbitrary group velocity, contrary to the lattice described in the tight-binding model. Switching on dissipation therefore induces an extended domain of the condensate in a dynamical regime, and fast matter waves propagating outward the dissipation domain cannot be stabilized by the incoming flows. Thus, the lattice, on the one hand, creates conditions where the dissipation is effectively point-like (that is, applied to a single cell) and, on the other hand, limits the group velocity of the phonons, allowing the establishment of the equilibrium state.
Details of numerical simulations
In the numerical simulations, we used the following model that corresponds to the Gross-Pitaevskii equation from the main text with an additional weak parabolic confinement an 2 y n , which models the optical dipole trap potential used in the experimental setup iħ dy n dt ¼ ÀJðy nÀ1 þ y nþ1 Þ þ Ujy n j 2 y n À igħ 2 d n0 y n þ an 2 y n ð11Þ The coefficient a determines the strength of the parabolic trapping and amounts to where m = 1.44 ⋅ 10 −25 g is the mass of the atom, d = 547 nm is the lattice period, and w = 2p ⋅ 11 Hz is the axial trapping frequency of the dipole trap. As in the main text, for other parameters, we have J/ħ = 229 s −1 and U/ħ = 2600 s −1 .
Forg ¼ 0, Eq. 13 has an approximate stationary ground-state Thomas-Fermi solution y n ¼ e ÀiðŨ r 2 À2Þt w n ; w n ¼ ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi where the "discrete Thomas-Fermi radius" N TF is determined by the conditionŨ r 2 ÀãN 2 TF ¼ 0, and r 2 = 1 is the normalized background density, that is, |w n | 2 ≈ 1 in the central region. In our case, N TF ≈ 50. We solved Eq. 13 for y n (t) on the grid of 201 sites n = −100,…, 100, where n = 0 corresponds to the site with the losses and is subject to the zero boundary conditions y −100 (t) = y 100 (t) = 0.
For the initial condition, we used the ground-state Thomas-Fermi distribution from Eq. 14: y n (t = 0) = w n . As discussed in the main text, in the parametric range corresponding to the existence of the CPA, the initial condition rapidly evolves to the quasi-stationary CPA solution characterized by the almost uniform density in the central region (n = −10, …, 10), whereas, in the absence of the CPA regime, the initial condition rapidly develops strong instability in the central region.
SUPPLEMENTARY MATERIALS
Supplementary material for this article is available at http://advances.sciencemag.org/cgi/ content/full/4/8/eaat6539/DC1 Fig. S1. Photograph of the vacuum chamber and sketch of the optical trapping scheme. S C I E N C E A D V A N C E S | R E S E A R C H A R T I C L E | 5,633.6 | 2018-08-01T00:00:00.000 | [
"Physics"
] |
A Fast Overlapping Community Detection Algorithm with Self-Correcting Ability
Due to the defects of all kinds of modularity, this paper defines a weighted modularity based on the density and cohesion as the new evaluation measurement. Since the proportion of the overlapping nodes in network is very low, the number of the nodes' repeat visits can be reduced by signing the vertices with the overlapping attributes. In this paper, we propose three test conditions for overlapping nodes and present a fast overlapping community detection algorithm with self-correcting ability, which is decomposed into two processes. Under the control of overlapping properties, the complexity of the algorithm tends to be approximate linear. And we also give a new understanding on membership vector. Moreover, we improve the bridgeness function which evaluates the extent of overlapping nodes. Finally, we conduct the experiments on three networks with well known community structures and the results verify the feasibility and effectiveness of our algorithm.
Introduction
Community structure is an important field in complex networks research. In the traditional social network, Newman et al. discover the community structure [1][2][3][4], which is a group of nodes with dense internal links and sparse connections between groups [3][4][5][6]. Later, from the metabolic networks [7] to the large-scale WWW webpage links [8], they are all community structures.
In the exploration of community structure, the crisp division [9,10] is put forth first by scholars; that is, a node belongs to only one community. In reality, networks are built in different relations, and nodes can be shared by many communities. For example, in human relationship, the relationship of two people may be family, friend, and colleague. When Palla et al. point out the overlap feature of the community [3], a number of soft division algorithms are designed to detect the overlapping community structure, and two main effective means are clique [3,[11][12][13][14] and optimization theory [14][15][16]. The methods based on clique have the high accuracy, but the process is complex. While the optimization algorithms choose the appropriate object function and get a lower complexity. But when and how to finish are ambiguous. Meanwhile, whether a vertex is the overlapping node is not a clear understanding in academic.
In this paper, we propose a fast overlapping community detection algorithm with self-correcting ability through the following contributions. First, we introduce new features of modularity as a new evaluation measurement and explore the advantage of the new weighted modularity in structure through combining the cohesive and density synthetically. Second, we propose three test conditions for overlapping nodes and present a fast overlapping community detection algorithm with self-correcting ability, which consists of two processes. Under the control of overlapping properties, the complexity of our algorithm tends to be approximate linear. Third, we give a new understanding on membership vector to improve the bridgeness function which evaluates the extent of overlapping nodes. To evaluate the feasibility and effectiveness, we implement our approach on three existing networks with the well-known community structures.
The rest of the paper is organized as follows. In Section 2, we describe the details of our new weighted modularity. In Section 3, we propose our fast overlapping community detection algorithm. In Section 4, we explore the estimation to the effect of overlapping node. The experimental results are discussed in Section 5. Finally, Section 6 concludes our work.
The Standard of Overlap.
The overlapping node is the vertex that belongs to more than one community. After the analysis on the well-known networks whose structures are also known, we propose three conditions to judge the overlapping node in this paper. In particular, all those are not disrelated. The priority is from the top down, and some nodes will be in accord with several conditions. If a node meets one of the conditions, it shall be an overlapping node.
Addition in
Modularity. This is the most common case.
Referring to the view of Lázár et al. [17], the node contribution is positive to their communities. The overlapping nodes link many adjacent vertex and belong to less communities (two is general), just as vertex Zds2 shown in Figure 1. In addition, if the adjacent nodes of overlap contain overlapping nodes, they can expand to overlapping region [17] as shown in Figure 2.
With the help of this region, all nodes in the sharing area make positive contribution to the belonging communities. The criterion is used in the variation caused by the nodes joining.
Strengthen the Internal Connection.
The analysis on the known networks reveals that some core nodes in community may reduce the holistic modularity. Though they have many adjacent nodes, the gain in inner connection is less than the outer connection, which results in the decrease of modularity. However, the internal connection of community is strengthened. In the sparse network, it performs as the addition in density besides the stationarity in modularity whose threshold value is 0.015 in Karate club network as shown in Figure 3. And in the dense network, the proportion of adjacent nodes in community is more than 1/3, according to the research on the Protein reaction network.
The Average Distribution of Belonging Factor.
In the networks, the belonging factors of some nodes belong to their communities impartially, which means the adjacent nodes are distributed to the communities averagely. If the degree of node is high, it can be in accord with the previous two cases, just as the Zds2 in Figure 1. Here, the average distribution is not absolute. To be in accord with the membership number, this paper sets a dynamic threshold. Calculating the absolute value of deviation with average distribution, if the sum is less than the dynamic threshold, it is all right. Definition 1. The average distribution of belonging factor should meet the condition in (1), in which is the belonging factor of node in community and is the membership number: In (1) accepted empirical value, the dynamic threshold is set as 1/2 . With the increasement in membership number, the threshold decreases gradually, and the average feature is more and more obvious in Figure 4.
The New Weighted Modularity.
Modularity is the measurement of evaluation and is the preferred object function. Considering various definitions of the past types [18], it summarizes three features after comparing the advantages and disadvantages of each type, which are the fairness, rationality, and independence.
There is the defect in the actual modularity [19], and it disobeys the fairness and rationality. As for the other cases, such as modularity based on node contribution [17] and the overlapping modularity, the weaknesses are discussed [18]. The fitness function [15] handles the inner and outer nodes of community, respectively, defined as (2). For a given community , the inner connection is in , the outer is out , and is the regulatory factor to control the size with the default value 1.0: The fitness function is unfit for modularity, too. That is because it weakens the inner connection which leads to recognize some structures unsuccessfully. For example, the complete graph and the ring structure are shown as Figure 5. They get the same value 1.0, but the structure is quite a bit different. In some subjects like biology or macromolecule, some special structure determines the function.
From the above discussion, in view of network density, we propose a new weighted modularity, which is composed of the density and cohesion. Definition 2. The new weighted modularity is defined as (3), in which the allocation parameter meets 0 < ≤ 0.5 and, for the community , is the number of inner nodes: The modularity of whole network is the average of each community. The research on several typical networks shows that the density of whole network is in a low level which is varying between 0.10 and 0.30. For example, Karate club network [1,2] is 0.139, Protein reaction network [3] is 0.290, and Dolphins interaction network [2] is 0.084. And here, the rule of parameter allocation obeys Pareto law that is twenty-eighty law. The cohesion gets the 80% weight, and the density gets the 20% weight. With the expansion on the size of networks, the links between nodes are weaker and weaker, and therefore the contribution of density is reduced in modularity.
Test on the parameter allocation reveals that when is getting 0.20, the minimum modularity of communities approaches 0.750 in the known networks. However, there is no regular pattern on the other allocation plan and the value is very discrete. As shown in Figure 6, the communities 1-3 are the Protein reaction network, and the communities 4 and 5 are the Karate club network. The detailed communities information of each cluster is shown in Table 1.
Set the allocation parameter as 0.20, and the modularity of communities are as follows.
Through analyzing the test in the known networks, it gets the minimum value of the community structure. In particular, the threshold is for the rough judgment which is not the only condition. In addition, after the node joining, the impacts on the original network, namely the smoothness of the modularity, need to be considered, namely the smoothness of the modularity. The modularity threshold is the last condition in judging, and the detailed introduction is given in next section.
Fast Overlapping Community Detection Algorithm with Self-Correcting Ability
The traditional community detection methods [3,[11][12][13]20] are visiting the nodes repeatedly. But after studying on many 4 The Scientific World Journal overlapping communities structure in some different types, we find out that the vast majority of nodes belong to only one community and the proportion of overlapping nodes is very small. For example, the proportion of overlapping nodes is 3/34 in Karate club network, 2/21 in Protein reaction network, 3/62 in Dolphins interaction network. Visiting the irrelevant nodes again and again results in the reduction of the algorithm efficiency. If we can distinguish the probable overlap nodes or the nonoverlapping nodes, the complexity can be cut down. The algorithm proposed in this paper is based on this idea. It labels the nodes by different property and removes the unrelated nodes to reduce the unnecessary visiting. The fast overlapping community detection algorithm with self-correcting ability is adopted in two stages. The first stage is the initial community discovery, and the second stage is the error detection and correction for specific nodes.
Raw Community Detection Algorithm.
Referring to the process of local modularity [18,21], our algorithm selects the root vertex firstly and visits the adjacent nodes in next layer. Then, our algorithm let the eligible nodes join the community and repeats those two steps until there are no qualified nodes. Some studies have verified that a majority of nodes belong to only one cluster in the networks. So, once the node has belonged to the community, it is unnecessary to be visited again. On the basis of this thought, we set each node with two attributes, which are isVisited and isLocated with default value false. They indicate whether a node is visited and located to the community and control the beginning root of the cluster and the range in the next access layer, respectively. Regulating the attributes of nodes, the algorithm proposed in this paper greatly reduces the amount of the adjacent nodes and cuts down the time complexity.
Raw community detection algorithm is composed of the following steps.
(1) Pick a node randomly whose isVisited attribute is false as the root of the community, and get the core of the original community.
(2) If the count of nodes in the community is greater than 3, install the community model and set the isVisited attribute to be true for all original nodes. Otherwise, set the isVisited of root vertex to be true; then return step 1.
(3) Get the adjacent nodes set whose isLocated attributes are false on the basis of parameter nodes which are going to access. And if the count is 0, go next; otherwise, turn to step 5.
(4) If the count of nodes in the community is not less than 5, check whether the isLocated attributes are true, and output the original community, and then return to step 1. Otherwise, return to step 1 directly.
(5) Access each node in the adjacent nodes set in turn, and set its isVisited attribute value is true.
(6) If a node meet the conditions, add it to the current community, and update the community model, and then put it to the next layer to access. Otherwise, return to step 5.
(7) If all the nodes are calculated, return to step 3 with the next layer nodes set.
Here, the conditions for a node joining the community are described as follows. Assume that the node joins the community, it meets one of the following conditions: (1) it brings gain in new modularity; (2) it gets addition in density and stable; (3) the rate linked with vertex in community is not less than threshold value (1/3); (4) the modularity is greater than threshold and stable. Theoretical research proves that random selection in nodes has nothing to do with the community structure of network. In other words, every node must belong to a certain community [3]. If root vertex cannot form the community, sign the isVisited attribute to be true, The Scientific World Journal 5 and go on to seek another available root vertex until the core is found.
In this progress, it builds the corresponding community model, which records the detailed information, such as the inner nodes and edges, and the outer edges. If a new node joins the community, it needs to update the community model immediately, which avoids the repetitive computation and lessens the complexity in time.
Moreover, the amount of expansion is not unlimited. From steps 3 to 7, even if there are coincident nodes every time, due to the six-degree theory [8], the diameter of the community is less than 6 and the cycle index is limited, too.
After discovering the community, the algorithm needs to confirm the isLocated attribute. The studies show that the threshold of most nodes' belonging factor in community is 2/3. But in this paper, the threshold of belonging factor is 3/4, and the rate of overlapping edges in all the adjacent edges is less than 1/4. Our algorithm takes a strict criterion to prevent missing the possible overlapping nodes, which is convenient for the error correction algorithm in the next step, just as the number 31 node in Karate club network.
Redistribution Algorithm for Unallocated Nodes.
Some studies on the known networks reveal that some nodes have accessed (isVisited = true) in the raw detection stage but failed to be assigned to any detected communities. The reasons are listed as follows: (1) high threshold modularity; (2) close connections among some nodes which form the structure like analogous triangle and any individual vertex cannot meet the requirements. For example, the number 25, number 26, and number 32 nodes are unable to form an independent cluster and need to join the community in union. So, it is necessary to execute the redistribution algorithm for unallocated nodes, which ensures every node belongs to cover.
The way to acquire the unallocated nodes set is calculating the subtraction between the beginning nodes and allocated nodes. For each unallocated node, the flow of the process is described as Algorithm 1.
Owing to the complexity of networks, there may be chain effect. For example, the adjacent nodes connected in sequence just form a chain. To deal with this case, the first node is allocated to the community, and the others are isolated nodes, which will not be participated in the next community detection.
Error Detection and Correction Algorithm for Specific
Nodes. The error detection and correction algorithm aims to recognize and check the overlapping nodes, which ensure the accuracy of the result. Here, the specific nodes are those whose isLocated attribute is false. In the process of initial community detection, the timing of nodes joining the community is different, and some core nodes are put in the cluster first. The other will not be identified completely because of missing the information of adjacent nodes. In the following redistribution process, the unallocated nodes decrease the membership value to the community, which may lead to wrongly label to other community. However, the isLocated attribute of wrong classified nodes is false, too. It is the minimum range to execute error correction algorithm on those nodes whose isLocated attribute is false. The researches on some known network show that the more evident the community structure is, the less the specific nodes are. In Karate club network, it is 11/34, in Protein reaction network it is 5/21, and in Dolphins interaction network it is 8/62.
In our algorithm, for every node, the procedure is described as follows.
(1) Get the adjacent communities list. If they exist, go next. Otherwise, go to step 5.
(2) Select an adjacent community, and verify whether the node meets the conditions to join the community (referring to distribution algorithm), and then decide to join or continue.
(3) If all the adjacent communities are tested, then check whether it is equal distribution. If it is, join each adjacent cluster. Otherwise, go next.
(4) If the type of joining community is equal distribution, return. Otherwise, continue to go.
(5) Get the belonging community set of the node. If the count is 1, return. Otherwise, go next.
(6) Choose an unverified community; recalculate the belonging factor which is linked with the community. If it is bigger than the threshold, return.
(7) Calculate the modularity variation when removing the node from the cluster. If it is positive, remove the node, return. Otherwise, directly return.
After the initial community detection, the information of nodes membership is completed mostly. The experiments reveal that some overlapping nodes interplay, which would change the node property. That is to say, a new joining node will bring its unallocated adjacent nodes to the same community, and the allocated nodes may be overlapping nodes and expand the overlap region, such as the number 10 and number 3. Therefore, the mutual linking nodes should be extracted, and the error detection and correction algorithm should be executed once again. It could clear up the possible wrong division, which makes the partition reasonable and steady.
Moreover, since the error detection and correction algorithm aims to the specific nodes, when expanding the nodes to the whole network, it is able to detect the validity of other community detection algorithms. If there is no change in node membership, they are just steady and accurate.
Original Bridgeness.
Community modularity is independent with the partition pattern, whether it is hard or crisp. But for the overlapping and nonoverlapping nodes, the roles are different in the network. It needs bridgeness [9] function to evaluate the position and importance of the overlap in the network, and the membership distribution is a major factor. The previous studies suggested that the sum of membership (1) communities = getCommunities(); / * get the communities of adjacent nodes * / (2) if (communities. Number == 0 then (3) / * if the number is 1 * / (4) Add(community); / * join the community * / (5) Return; (6) else (7) for communities do (8) / * for each community * / (9) if matchConditions then (10) / * judge whether node match the conditions * / (11) Add(community); / * join community * / (12) end if (13) end for (14) if averageDistribution() then (15) / * if it is equal istribution * / (16) AddToAll(communities); / * join each community * / (17) Return; (18) end if (19) if noAdd then (20) / * if it doesn't match the above * / (4), in which we assume that the number of belonging communities is : For each node, bridgeness means the degree of sharing in different communities. Nepusz et al. define that it is 0 when the node belongs to only one cluster. And it is 1.0 when sharing equally by the belonging communities. On the basis of membership vertex [ 1 , 2 , . . . , ] of overlapping node , after setting the uniform distribution [1/ , 1/ , . . . , 1/ ] as reference vertex, they define the bridgeness [9] as
Improved Bridgeness.
The distribution of belonging factors determines the status in network. The more approximate to the uniform distribution it is, the greater the effect is. However, the membership number is a positive and significant element. In addition, the degree of node itself is not an ignorable element. Obviously, Nepusz et al. overlook its own factors. If two nodes all conform to uniform distribution, the bridgeness is 1.0. So it cannot indicate the importance. Moreover, the node contribution of overlap can be positive in more than one community, and 1/ fails to express the actual average value of the uniform distribution. So, we set the average value as all the belonging factors. Considering the membership number is lesser than the degree of node, we introduce the improved bridgeness as follows synthetically.
Definition 4. Improved bridgeness is defined as follows, in which is the degree of node , and is the value of the actual uniform distribution: As shown in (6), the greater the membership number is, the higher the degree is, and the more similar to uniform distribution the membership vertex is. The sum of variance is less, and then the value of bridgeness is greater, which signifies the important standing in network. The detailed results of comparison and analysis are demonstrated in the next experiments.
Experiment Results and Analysis
In this section, we evaluate our algorithm with the community structures of three well-known networks, which, respectively, are Karate club network, Protein reaction network, and Dolphins interaction network. Karate club network [1,15] is the community structure network initially found The Scientific World Journal 7 by Newman, which represents the traditional social research network. Protein reaction network [3] is a network composed of protein metabolism, on behalf of the emerging biology research network, and Palla et al. find the overlapping characteristics of the community through it. Dolphins interaction network [15] is a network built according to interaction information of waters bottlenose dolphin living in New Zealand, which belongs to natural science research field, and many scholars take it as a research subject.
Karate Club Network.
Karate club network is the classic interpersonal relationship network, in which 34 members constitute 78 connections. Since the disagreements between members lead to divide into two pies, the network is divided into two distinct communities. In traditional hard classification model, the node can only belong to one community. However, after utilizing our algorithm on this network, we found that three nodes meet the criteria of overlapping node. Since overlap is an important characteristic of complex networks, through analyzing the network structure, the overlap model is more in accord with the actual situation.
In the error detection and correction algorithm, the set of the detected nodes is {"34", "3", "32", "9", "31", "28", "29", "26", "26", "25", "20", "10"}. Since the adjacency node has not been fully allocated in the process of the initial distribution, most of the nodes lack of adjacency information reference and are unable to determine isLocated = true. In view of the connected closely node {"3", "9", "10"}, the theoretical analysis has pointed out this problem that wrong classification may result in the change of the node properties. So, the error detection and correction algorithm should be performed again for such a node, which can eliminate the unreasonable factors and make the classification tend to be stable. Figure 7 is the final community structure found by our algorithm, which is in accordance with the standard distribution and includes the red solid nodes {"3", "9", "31"}, namely, the overlapping nodes. More information of overlapping nodes is shown in Table 2. Compared with number 9 and number 31, both of them belong to two communities. The variance of the former is less than number 31, and the degree is also greater. However, the original bridgeness of number 9 is smaller, which is irrational. And the improved bridgeness displays the difference in nodes, in accord with their own status in the network. As seen from the membership value, in Karate club network, the sum of overlap node membership degree is greater than 1.0, and each node can increase modularity for belonging community. In addition, the cumulative difference of the absolute value of the node 3 and node 31 membership vertex value and average distribution is less than or equal to a quarter, which are also in accord with the condition of average distribution. Therefore, it also verifies the principle and sequence of overlapping node determination conditions.
Protein Reaction Network.
Protein network is built according to metabolism response relationship between the biological protein, containing 21 nodes and 61 sides. It is a typical overlapping community network, the community structure of which is obvious. Through running our algorithm for finding original community, all of the communities are identified, and the nodes are all classified correctly, including high overlapping nodes. The redistribution algorithm needs not to run; thus our algorithm quickly found the overlapping community, which validates the efficiency of our algorithm.
Com3: "pph21" → "pph22", "tpd3", "rts3", "cdc55" → "zds1", "zds2". Figure 8 illustrates the three communities found by our algorithm, which is labeled by different shapes and colors. The structure is the same as the standard classification, and the solid red nodes {"zds1", "zds2"} are the overlapping nodes. The detailed information is shown in Table 3. In particular, 8 The Scientific World Journal Protein reaction network also demonstrates the irrationality of the original bridgeness. Zds1 node degree and community membership number are all greater than Zds2. But the original bridgeness is still lower than Zds2, and the difference of numerical value is very small, while our improved bridgeness avoids this defect, which is more in line with the node's position in the network community.
The detailed information of overlapping nodes is shown in Table 4. The set of found overlap nodes is {"19", "39", "7"}, in which {"19", "7"} are in accord with multiple overlap conditions. In addition, node 39 conforms to standard uniform distribution, fairly connecting the two communities. Through the comparison of the original and improved bridgeness, since the original bridgeness only considers the membership values, it is unreasonable distinctly, which is illustrated by node 39. initial community detection algorithm proposed in this paper is different and based on extracting the overlapping node features. Multiple conditions and multiple thresholds are constrained to find natural communities, which contains situations as follows: (1) the basic modularity increases; (2) density increases with stability; (3) the connection is greater than the community size threshold; (4) the modularity is greater than the specified threshold with stability; (5) the belonging factors are in accord with uniform distribution. These conditions form an access priority according to the judgement order, and high computational complexity would inspect in the final. Through the reasonable arrangement of the condition priority, it avoids the unnecessary calculation and reduces the complexity.
Operating Efficiency of the Algorithm.
In the process of our algorithm running, through controlling the attribute of isLocated, it gradually shrinks the expansion space of the adjacent available nodes and cuts down the repeated access of the nodes, which improve the operation efficiency of our algorithm. For nonoverlapping nodes, theoretical visit is only one time. For the possible overlapping nodes, it only needs to visit and verify the relevant adjacent communities. The attributes are to classify nodes, and it can avoid the repeated visit and compute for the irrespective node effectively.
Strategy of the Algorithm.
In the process of discovering natural communities, by establishing community model and updating community information in real time, it avoids the repeated compute community information when the nodes join the community. Most of the time is spent in the process of finding natural community. The subsequent error detection and correction algorithm is a supplement. Since the involved node is less, the complexity of the calculation is lower than community discovery process. So the overall complexity is approximate linear. It shows the node proportion information in each stage during our algorithm running in Table 5.
In each period of our whole algorithm, the core is the raw community detection algorithm, which detects the main areas of communities. It determines the number of network partition, and affects the efficiency of the algorithm. Compared with the data in the network, the vast majority of nodes in the community are nonoverlap. The overlap nodes and nonoverlap nodes can be identified by the attribute of isLocated, which greatly reduces the repeated visit and calculation to the irrelevant nodes, and the algorithm enables to detect communities rapidly. In Figure 10, more than 80% nodes have been confirmed in the stage of raw community detection. In addition, the more obvious the community structure in the network is, the smaller the proportion of nodes need to redistribute is. Since the number of nodes in error detection and correction stage is limited and the community structure has been clear, it only needs to verify the specific nodes, which do not access large-scale adjacent nodes. So the error detection and correction algorithm has less effect on the increase with overall complexity.
Conclusions
In this paper, we first put forward the new features of modularity and also show the advantage of the new weighted modularity in structure, which is based on cohesive and density synthetically. The experiments are conducted on the classical networks with well-known community structure, which explore the distribution of the parameter factor. In addition, according to the proportion of overlap in the community, we present a fast overlapping community algorithm with self-correction by setting the nodes with the attributes of isVisited and isLocated, which consists of two stages: (1) the initial community detection algorithm and (2) the error detection and correction algorithm. We also propose an improved bridgeness function to evaluate the extent of overlapping nodes. The experimental results demonstrate that our algorithm is good for the expansion of discovery algorithm when extracting the overlap features. Although our algorithm is already effective, but the later work can be expanded in more different types of networks, to test out appropriate parameter and conclude parameter distribution principle. In addition, the threshold setting in the overlapping node conditions, such as in the modularity, stationarity, and the close connection, is strict in algorithm, which expands the scope of nodes in error detection and correction slightly. However, finding and extracting the new features of overlapping node are the directions in the next step. Through the experiments on the existing network, our algorithm can be applied to large-scale networks in the future. | 7,131.6 | 2014-03-13T00:00:00.000 | [
"Computer Science"
] |
Single Document Summarization as Tree Induction
In this paper, we conceptualize single-document extractive summarization as a tree induction problem. In contrast to previous approaches which have relied on linguistically motivated document representations to generate summaries, our model induces a multi-root dependency tree while predicting the output summary. Each root node in the tree is a summary sentence, and the subtrees attached to it are sentences whose content relates to or explains the summary sentence. We design a new iterative refinement algorithm: it induces the trees through repeatedly refining the structures predicted by previous iterations. We demonstrate experimentally on two benchmark datasets that our summarizer performs competitively against state-of-the-art methods.
Introduction
Single-document summarization is the task of automatically generating a shorter version of a document while retaining its most important information.The task has received much attention in the natural language processing community due to its potential for various information access applications.Examples include tools which digest textual content (e.g., news, social media, reviews), answer questions, or provide recommendations.
Of the many summarization paradigms that have been identified over the years (see Mani 2001 andNenkova andMcKeown 2011 for comprehensive overviews), two have consistently attracted attention.In abstractive summarization, various text rewriting operations generate summaries using words or phrases that were not in the original text, while extractive approaches form summaries by copying and concatenating the most important spans (usually sentences) in a document.Recent approaches to (single-document) extractive summarization frame the task as a sequence labeling problem taking advantage of the success of neural network architectures (Bahdanau et al., 2015).The idea is to predict a label for each sentence specifying whether it should be included in the summary.Existing systems mostly rely on recurrent neural networks (Hochreiter and Schmidhuber, 1997) to model the document and obtain a vector representation for each sentence (Nallapati et al., 2017;Cheng and Lapata, 2016).Intersentential relations are captured in a sequential manner, without taking the structure of the document into account, although the latter has been shown to correlate with what readers perceive as important in a text (Marcu, 1999).Another problem in neural-based extractive models is the lack of interpretability.While capable of identifying summary sentences, these models are not able to rationalize their predictions (e.g., a sentence is in the summary because it describes important content upon which other related sentences elaborate).
The summarization literature offers examples of models which exploit the structure of the underlying document, inspired by existing theories of discourse such as Rhetorical Structure Theory (RST; Mann and Thompson 1988).Most approaches produce summaries based on tree-like document representations obtained by a parser trained on discourse annotated corpora (Carlson et al., 2003;Prasad et al., 2008).For instance, Marcu (1999) argues that a good summary can be generated by traversing the RST discourse tree structure top-down, following nucleus nodes (discourse units in RST are characterized regarding their text importance; nuclei denote central units, whereas satellites denote peripheral ones).Other work (Hirao et al., 2013;Yoshida et al., 2014) extends this idea by transforming RST trees into dependency trees and generating summaries by tree trimming.Gerani et al. (2014) summarize product reviews; their system aggregates RST trees rep-1.
One wily coyote traveled a bit too far from home, and its resulting adventure through Harlem had alarmed residents doing a double take and scampering to get out of its way Wednesday morning.
2.
Police say frightened New Yorkers reported the coyote sighting around 9:30 a.m., and an emergency service unit was dispatched to find the animal.
3.
The little troublemaker was caught and tranquilized in Trinity Cemetery on 155th street and Broadway, and then taken to the Wildlife Conservation Society at the Bronx Zoo, authorities said.4.
"The coyote is under evaluation and observation," said Mary Dixon, spokesperson for the Wildlife Conservation Society.
5.
She said the Department of Environmental Conservation will either send the animal to a rescue center or put it back in the wild.6.
According to Adrian Benepe, New York City Parks Commissioner, coyotes in Manhattan are rare, but not unheard of.7.
"This is actually the third coyote that has been seen in the last 10 years," Benepe said.8.
Benepe said there is a theory the coyotes make their way to the city from suburban Westchester.9.
He said they probably walk down the Amtrak rail corridor along the Hudson River or swim down the Hudson River until they get to the city.resenting individual reviews into a graph, from which an abstractive summary is generated.Despite the intuitive appeal of discourse structure for the summarization task, the reliance on a parser which is both expensive to obtain (since it must be trained on labeled data) and error prone, presents a major obstacle to its widespread use.
Recognizing the merits of structure-aware representations for various NLP tasks, recent efforts have focused on learning latent structures (e.g., parse trees) while optimizing a neural network model for a down-stream task.Various methods impose structural constraints on the basic attention mechanism (Kim et al., 2017;Liu and Lapata, 2018), formulate structure learning as a reinforcement learning problem (Yogatama et al., 2017;Williams et al., 2018), or sparsify the set of possible structures (Niculae et al., 2018).Although latent structures are mostly induced for individual sentences, Liu and Lapata (2018) induce dependency-like structures for entire documents.
Drawing inspiration from this work and existing discourse-informed summarization models (Marcu, 1999;Hirao et al., 2013), we frame extractive summarization as a tree induction problem.Our model represents documents as multiroot dependency trees where each root node is a summary sentence, and the subtrees attached to it are sentences whose content is related to and cov-ered by the summary sentence.An example of a document and its corresponding tree is shown in Figure 1; tree nodes correspond to document sentences; blue nodes represent those which should be in the summary, dependent nodes relate to or are subsumed by the parent summary sentence.
We propose a new framework that uses structured attention (Kim et al., 2017) as both the objective and attention weights for extractive summarization.Our model is trained end-to-end, it induces document-level dependency trees while predicting the output summary, and brings more interpretability in the summarization process by helping explain how document content contributes to the model's decisions.We design a new iterative structure refinement algorithm, which learns to induce document-level structures through repeatedly refining the trees predicted by previous iterations and allows the model to infer complex trees which go beyond simple parent-child relations (Liu and Lapata, 2018;Kim et al., 2017).The idea of structure refinement is conceptually related to recently proposed models for solving iterative inference problems (Marino et al., 2018;Putzky and Welling, 2017;Lee et al., 2018).It is also related to structured prediction energy networks (Belanger et al., 2017) which approach structured prediction as iterative miminization of an energy function.However, we are not aware of any previous work considering structure refinement for tree induction problems.
Our contributions in this work are three-fold: a novel conceptualization of extractive summarization as a tree induction problem; a model which capitalizes on the notion of structured attention to learn document representations based on iterative structure refinement; and large-scale evaluation studies (both automatic and human-based) which demonstrate that our approach performs competitively against state-of-the-art methods while being able to rationalize model predictions.
Model Description
Let d denote a document containing several sentences [sent 1 , sent 2 , • • • , sent m ], where sent i is the i-th sentence in the document.Extractive summarization can be defined as the task of assigning a label y i ∈ {0, 1} to each sent i , indicating whether the sentence should be included in the summary.It is assumed that summary sentences represent the most important content of the document.
Baseline Model
Most extractive models frame summarization as a classification problem.Recent approaches (Zhang et al., 2018;Dong et al., 2018;Nallapati et al., 2017;Cheng and Lapata, 2016) incorporate a neural network-based encoder to build representations for sentences and apply a binary classifier over these representations to predict whether the sentences should be included in the summary.Given predicted scores r and gold labels y, the loss function can be defined as: The encoder in extractive summarization models is usually a recurrent neural network with Long-Short Term Memory (LSTM; Hochreiter and Schmidhuber 1997) or Gated Recurrent Units (GRU; Cho et al. 2014).In this paper, our baseline encoder builds on the Transformer architecture (Vaswani et al., 2017), a recently proposed highly efficient model which has achieved state-of-the-art performance in machine translation (Vaswani et al., 2017) and question answering (Yu et al., 2018).The Transformer aims at reducing the fundamental constraint of sequential computation which underlies most architectures based on RNNs.It eliminates recurrence in favor of applying a self-attention mechanism which directly models relationships between all words in a sentence.
More formally, given a sequence of input vectors {x 1 , x 2 , • • • , x n }, the Transformer is composed of a stack of N identical layers, each of which has two sub-layers: where For our extractive summarization task, the baseline system is composed of a sentence-level Transformer (T S ) and a document-level Transformer (T D ), which have the same structure.For each sentence in the input document, T S is applied to obtain a contextual representation for each word: And the representation of a sentence is acquired by applying weighted-pooling: Document-level transformer T D takes s i as input and yields a contextual representation for each sentence: Following previous work (Nallapati et al., 2017), we use a sigmoid function after a linear transformation to calculate the probability r i of selecting s i as a summary sentence:
Structured Summarization Model
In the Transformer model sketched above, intersentence relations are modeled by multi-head attention based on softmax functions, which only capture shallow structural information.Our summarizer, which we call SUMO as a shorthand for Structured Summarization Model classifies sentences as summary-worthy or not, and simultaneously induces the structure of the source document as a multi-root tree.An overview of SUMO is illustrated in Figure 2. The model has the same sentence-level encoder T S as the baseline Transformer model (see the bottom box in Figure 2), but differs in two important ways: (a) it uses structured attention to model the roots (i.e., summary sentences) of the underlying tree (see the upper box in Figure 2); and (b) through iterative refinement it is able to progressively infer more complex structures from past guesses (see the second and third block in Figure 2).Structured Attention Assuming document sentences have been already encoded, SUMO first calculates the unnormalized root score ri for sent i to indicate the extent to which it might be selected as root in the document tree.It also calculates the unnormalized edge score ẽij for sentence pair sent i , sent j indicating the extent to which sent i might be the head of sent j in that tree (first upper block in Figure 2).To inject structural bias, SUMO normalizes these scores as the marginal probabilities of forming edges in the document dependency tree.
We use the Tree-Matrix-Theorem (TMT; Koo et al. 2007;Tutte 1984) to calculate root marginal probability r i and edge marginal probability e ij , following the procedure introduced in Liu and Lapata (2017).As illustrated in Algorithm 1, we first build the Laplacian matrix L based on unnormalized scores and calculate marginal probabilities by matrix inverse-based operations ( L−1 ).We refer the interested reader to Koo et al. (2007) and Liu and Lapata (2017) for more details.In contrast to Liu and Lapata (2017), who compute the marginal probabilities of a single-root tree, our tree has multiple roots since in our task the summary typically contains multiple sentences.Given sentence vector s i as input, SUMO computes: ) Iterative Structure Refinement SUMO essentially reduces summarization to a rooted-tree parsing problem.However, accurately predicting a tree in one shot is problematic.Firstly, when predicting the dependency tree, the model has solely Algorithm 1: Calculate Tree Marginal Probabilities based on Tree-Matrix-Theorem Function TMT(r i , ẽij )l: access to labels for the roots (aka summary sentences), while tree edges are latent and learned without an explicit training signal.And as previous work (Liu and Lapata, 2017) has shown, a single application of TMT leads to shallow tree structures.Secondly, the calculation of ri and ẽij would be based on first-order features alone, however, higher-order information pertaining to siblings and grandchildren has proved useful in discourse parsing (Carreras, 2007).
We address these issues with an inference algorithm which iteratively infers latent trees.In contrast to multi-layer neural network architectures like the Transformer or Recursive Neural Networks (Tai et al., 2015) where word representations are updated at every layer based on the output of previous layers, we refine only the tree structure during each iteration, word representations are not passed across multiple layers.Empirically, at early iterations, the model learns shallow and simple trees, and information propagates mostly between neighboring nodes; as the structure gets more refined, information propagates more globally allowing the model to learn higher-order features.
Algorithm 2 provides the details of our refinement procedure.SUMO takes K iterations to learn the structure of a document.For each sentence, we initialize a structural vector v 0 i with sentence vector s i .At iteration k, we use sentence embeddings from the previous iteration v k−1 to calculate unnormalized root rk i and edge ẽk ij scores using a linear transformation with weight W k r and a bilinear transformation with weight W k e , respectively.Marginal root and edge probabilities are subsequently normalized with the TMT to obtain r k i and e k ij (see lines 4-6 in Algorithm 2).Then, sentence embeddings are updated with k-Hop Propagation.The latter takes as input the initial sentence representations s rather than sentence embeddings v k−1 from the previous layer.In other words, new embeddings v k are computed from scratch relying on the structure from the previous layer.Within the k-Hop-Propagation function (lines 12-19), edge probabilities e k ij are used as attention weights to propagate information from a sentence to all other sentences in k hops.p l i and c l i represent parent and child vectors, respectively, while vector z l i is updated with contextual information at hop l.At the final iteration (lines 9 and 10), the top sentence embeddings v K−1 are used to calculate the final root probabilities r K .
We define the model's loss function as the summation of the losses of all iterations: SUMO uses the root probabilities of the top layer as the scores for summary sentences.
The k-Hop-Propagation function resembles the computation used in Graph Convolution Networks (Kipf and Welling, 2017; Marcheggiani and Titov, 2017).GCNs have been been recently applied to latent trees (Corro and Titov, 2019), however not in combination with iterative refinement.
Experiments
In this section we present our experimental setup, describe the summarization datasets we used, discuss implementation details, our evaluation protocol, and analyze our results.
Calculate unnormalized edge scores: Calculate marginal probabilities: r k , e k = TMT(r k , ẽk ) Update sentence representations: v k = k-Hop-Propagation(e k , s, k) 8 end 9 Calculate final unnormalized root and edge scores: 10 Calculate final root and edge probabilities: r K , e K = TMT(r K , ẽK ) 12 Function k-Hop-Propagation(e, s, k): is all articles published on January 1, 2007 or later).We also followed their filtering procedure, documents with summaries that are shorter than 50 words were removed from the raw dataset.The Both datasets contain abstractive gold summaries, which are not readily suited to training extractive summarization models.A greedy algorithm similar to Nallapati et al. (2017) was used to generate an oracle summary for each document.The algorithm explores different combinations of sentences and generates an oracle consisting of multiple sentences which maximize the ROUGE score with the gold summary.We assigned label 1 to sentences selected in the oracle summary and 0 otherwise and trained SUMO on this data.
Implementation Details
We followed the same training procedure for SUMO and various Transformer-based baselines.The vocabulary size was set to 30K.We used 300D word embeddings which were initialized randomly from N (0, 0.01).The sentence-level Transformer has 6 layers and the hidden size of FFN was set to 512.The number of heads in MHAtt was set to 4. Adam was used for training (β 1 = 0.9, β 2 = 0.999).We adopted the learning rate schedule from Vaswani et al. (2017) with warming-up on the first 8,000 steps.SUMO and related Transformer models produced 3-sentence summaries for each document at test time (for both CNN/DailyMail and NYT datasets).
Automatic Evaluation
We evaluated summarization quality using ROUGE F 1 (Lin, 2004).We report unigram and bigram overlap (ROUGE-1 and ROUGE-2) as a means of assessing informativeness and the longest common subsequence (ROUGE-L) as a means of assessing fluency.
Table 1 summarizes our results.We evaluated two variants of SUMO, with one and three structured-attention layers.We compared against a baseline which simply selects the first three sentences in each document (LEAD-3) and several incarnations of the basic Transformer model introduced in Section 2.1.These include a Transformer without document-level self-attention and two variants with document-level self attention instantiated with one and three layers.Several stateof-the-art models are also included in Table 1, both extractive and abstractive.
REFRESH (Narayan et al., 2018) is an extractive summarization system trained by globally optimizing the ROUGE metric with reinforcement learning.The system of Marcu (1999) is another extractive summarizer based on RST parsing.It uses discourse structures and RST's notion of nuclearity to score document sentences in terms of their importance and selects the most important ones as the summary.Our re-implementation of Marcu (1999) used the parser of Zhao and Huang (2017) to obtain RST trees.Durrett et al. (2016) develop a summarization system which integrates a compression model that enforces grammaticality and coherence.See et al. (2017) present an abstractive summarization system based on an encoder-decoder architecture.Celikyilmaz et al.'s (2018) system is state-of-the-art in abstractive summarization using multiple agents to represent the document as well a hierarchical attention mechanism over the agents for decoding.
As far as SUMO is concerned, we observe that it outperforms a simple Transformer model without any document attention as well as variants with document attention.SUMO with three layers of structured attention overall performs best, confirming our hypothesis that document-level structure is beneficial for summarization.The results in Table 1 also reveal that SUMO and all Transformer-based models with document attention (doc-att) outperform LEAD-3 across metrics.SUMO (3-layer) is competitive or better than stateof-the-art approaches.Examples of system output are shown in Table 4.
Finally, we should point out that SUMO is superior to Marcu (1999) even though the latter employs linguistically informed document representations.
Human Evaluation
In addition to automatic evaluation, we also assessed system performance by eliciting human judgments.Our first evaluation quantified the degree to which summarization models retain key information from the document following a question-answering (QA) paradigm (Clarke and Lapata, 2010;Narayan et al., 2018).We created a set of questions based on the gold summary under the assumption that it highlights the most important document content.We then examined whether participants were able to answer these questions by reading system summaries alone without access to the article.The more questions a system can answer, the better it is at summarizing the document as a whole.
We randomly selected 20 documents from the CNN/DailyMail and NYT datasets, respectively and wrote multiple question-answer pairs for each gold summary.We created 71 questions in total varying from two to six questions per gold summary.We asked participants to read the summary and answer all associated questions as best they could without access to the original document or the gold summary.Examples of questions and their answers are given in Table 4.We adopted the same scoring mechanism used in Clarke and Lapata (2010) with a score of one, partially correct answers with a score of 0.5, and zero otherwise.Answers were elicited using Amazon's Mechanical Turk platform.Participants evaluated summaries produced by the LEAD-3 baseline, our 3-layered SUMO model and multiple state-of-the-art systems.We elicited 5 responses per summary.Table 2 (QA column) presents the results of the QA-based evaluation.Based on the summaries generated by SUMO, participants can answer 65.3% of questions correctly on CNN/DailyMail and 57.2% on NYT.Summaries produced by LEAD-3 and comparison systems fare worse, with REFRESH (Narayan et al., 2018) coming close to SUMO on CNN/DailyMail but not on NYT.Overall, we observe there is room for improvement since no system comes close to the extractive oracle, indicating that improved sentence selection would bring further performance gains to extractive approaches.Between-systems differences are all statistically significant (using a one-way ANOVA with posthoc Tukey HSD tests; p < 0.01) with the exception of LEAD-3 and See et al. (2017) in both CNN+DM and NTY, Narayan et al. (2018) and SUMO in both CNN+DM and NTY, and LEAD-3 and Durrett et al. (2016) in NYT.
Our second evaluation study assessed the overall quality of the summaries by asking participants to rank them taking into account the following criteria: Informativeness , Fluency, and Succinctness.The study was conducted on the Amazon Mechanical Turk platform using Best-Worst Scaling (Louviere et al., 2015), a less labor-intensive alternative to paired comparisons that has been shown to produce more reliable results than rating scales (Kiritchenko and Mohammad, 2017).Participants were presented with a document and CNN+DM NYT P H EA P H EA Parser 24.8 8.9 -18.7 10.6 -SUMO (1-layer) 69.0 2.9 23.1 54.7 3.6 20.6 SUMO (3-layer) 52.7 3.7 25.3 45.1 6.2 21.6 Left Branching --21.4 --21.3Right Branching --7.3 --6.7 Table 3: Descriptive statistics (Projectivity (%), Height and EdgeAgreement (%)) for dependency trees produced by our model and the RST discourse parser of Zhao and Huang (2017).Results are shown on the CNN/DailyMail and NYT test sets.
summaries generated from 3 out of 7 systems and were asked to decide which summary was better and which one was worse, taking into account the criteria mentioned above.We used the same 20 documents from each dataset as in our QA evaluation and elicited 5 responses per comparison.The rating of each system was computed as the percentage of times it was chosen as best minus the times it was selected as worst.Ratings range from -1 (worst) to 1 (best).As shown in Table 2 (Rank column), participants overwhelming prefer the extractive oracle summaries followed by SUMO and REFRESH (Narayan et al., 2018).Abstractive systems (Celikyilmaz et al., 2018;See et al., 2017;Durrett et al., 2016) perform relatively poorly in this evaluation; we suspect that humans are less forgiving to fluency errors and slightly incoherent summaries.Interestingly, gold summaries fare worse than the oracle and extractive systems.Albeit fluent, gold summaries naturally contain less detail compared to oracle-based ones; on virtue of being abstracts, they are written in a telegraphic style, often in conversational language while participants prefer the more lucid style of the extracts.All pairwise comparisons among systems are statistically significant (using a one-way ANOVA with post-hoc Tukey HSD tests; p < 0.01) except LEAD-3 and See et al. (2017) in both CNN+DM and NTY, Narayan et al. (2018) andSUMO in both CNN+DM andNTY, andLEAD andDurrett et al. (2016) in NYT.
Evaluation of the Induced Structures
To gain further insight into the structures learned by SUMO, we inspected the trees it produces.Specifically, we used the Chu-Liu-Edmonds algorithm (Chu and Liu, 1965;Edmonds, 1967) to extract the maximum spanning tree from the atten-tion scores.We report various statistics on the characteristics of the induced trees across datasets in Table 3.We also examine the trees learned from different SUMO variants (with different numbers of iterations) in order to establish whether the iterative process yields better structures.Specifically, we compared the dependency trees obtained from our model to those produced by a discourse parser (Zhao and Huang, 2017) trained on a corpus which combines annotations from the RST treebank (Carlson et al., 2003) and the Penn Treebank (Marcus et al., 1993).Unlike traditional RST discourse parsers (Feng and Hirst, 2014), which first segment a document into Elementary Discourse Units (EDUs) and then build a discourse tree with the EDUs2 as leaves, Zhao and Huang (2017) parse a document into an RST tree along with its syntax subtrees without segmenting it into EDUs.The outputs of their parser are ideally suited for comparison with our model, since we only care about document-level structures, and ignore the subtrees within sentence boundaries.We converted the constituency RST trees obtained from the discourse parser into dependency trees using Hirao et al.'s algorithm (2013).
As can be seen in Table 3, the dependency structures induced by SUMO are simpler compared to those obtained from the discourse parser.Our trees are generally shallower, almost half of them are projective.We also calculated the percentage of head-dependency edges that are identical between learned trees and parser generated ones.Although SUMO is not exposed to any annotated trees during training, a number of edges agree with the outputs of the discourse parser.Moreover, we observe that the iterative process involving multiple structured attention layers helps generate better discourse trees.We also compare SUMO trees against a left-and right-branching baseline, where the document is trivially parsed into a left-and right-branching tree forming a chain-like structure.As shown in Table 3, SUMO outperforms these baselines (with the exception of the onelayered model on NYT).We should also point out that the edge agreement between SUMO generated trees and left/right branching trees is low (around 30% on both datasets), indicating that the trees we learn are different from a simple chain.Shortfall is projected to be $2.9 billion.
QA
Which company specializes in digital preservation of threatened ancient and historical architecture?
[CyArk] How many World Heritage sites does the company plan to preserve?[500] What is Road Home? [the Louisiana grant program for homeowners who lost their houses to hurricanes Katrina and Rita] When is the applicants' deadline for the Road Home?
[July 31] Why is the program expected to cost far more than $7.5 billion?[many more families have applied than officials anticipated] What is the shortfall projected to be?
[$2.9 billion] LEAD-3 In 2001, the Taliban wiped out 1700 years of history in a matter of seconds, by blowing up ancient Buddha statues in central Afghanistan with dynamite.They proceeded to do so after an attempt at bringing down the 175-foot tall sculptures with anti-aircraft artillery had failed.Sadly, the event was just the first in a series of atrocities that have robbed the world of some of its most prized cultural heritage.
The Road Home, the Louisiana grant program for homeowners who lost their houses to hurricanes Katrina and Rita, is expected to cost far more than the $7.5 billion provided by the Federal Government, in part because many more families have applied than officials had anticipated.As a result, Louisiana officials on Tuesday night set a July 31 deadline for applicants, who can receive up to $150,000 to repair or rebuild their houses.
With the cutoff date, the State hopes to be able to figure out how much more money it needs to pay for the program.
See et al. (2017)
The Taliban wiped out 1700 years of history in a matter of seconds.
The thought of losing a piece of our collective history is a bleak one.But if loss can't be avoided, technology can lend a hand.
Louisiana grant program for homeowners who lost their houses to hurricanes Katrina and Rita is expected to cost far more than $7.5 billion provided by federal government.Louisiana officials set July 31 deadline for applicants, who can receive up to $150,000 to repair or rebuild their houses.Narayan et al. (2018) Sadly, the event was just the first in a series of atrocities that have robbed the world of some of its most prized cultural heritage.But historical architecture is also under threat from calamities which might well escape our control, such as earthquakes and climate change.
The thought of losing a piece of our collective history is a bleak one.
The Road Home, the Louisiana grant program for homeowners who lost their houses to hurricanes Katrina and Rita, is expected to cost far more than the $7.5 billion provided by the federal government, in part because many more families have applied than officials had anticipated.
With the cutoff date, the State hopes to be able to figure out how much more money it needs to pay for the program.
The shortfall is projected to be $2.9 billion.
SUMO
In 2001, the Taliban wiped out 1700 years of history in a matter of seconds, by blowing up ancient Buddha statues in central Afghanistan with dynamite.Sadly, the event was just the first in a series of atrocities that have robbed the world of some of its most prized cultural heritage.Now Cyark, a non-profit company founded by an Iraqi-born engineer, is using groundbreaking laser scanning to ensure that -at the very least -incredibly accurate digital versions of the world's treasures will stay with us forever.
The Road Home, the Louisiana grant program for homeowners who lost their houses to hurricanes Katrina and Rita, is expected to cost far more than the $7.5 billion provided by the federal government, in part because many more families have applied than officials had anticipated.As a result, Louisiana officials on Tuesday night set a July 31 deadline for applicants, who can receive up to $150,000 to repair or rebuild their houses.
The shortfall is projected to be $2.9 billion.
Table 4: GOLD human authored summaries, questions based on them (answers shown in square brackets) and automatic summaries produced by the LEAD-3 baseline, the abstractive system of See et al. (2017), REFRESH (Narayan et al., 2018), and SUMO for a CNN and NYT (test) article.
Conclusions
In this paper we provide a new perspective on extractive summarization, conceptualizing it as a tree induction problem.We present SUMO, a Structured Summarization Model, which induces a multi-root dependency tree of a document, where roots are summary-worthy sentences, and subtrees attached to them are sentences which elaborate or explain the summary content.SUMO generates complex trees following an iterative refinement process which builds latent structures while using information learned in previous iterations.Experiments on two datasets, show that SUMO performs competitively against state-of-the-art methods and induces meaningful tree structures.
In the future, we would like to generalize SUMO to abstractive summarization (i.e., to learn latent structure for documents and sentences) and perform experiments in a weakly-supervised setting where summaries are not available but labels can be extrapolated from the article's title or topics.
and PosEmb is the function of adding positional embeddings to the input; the superscript l indicates layer depth; LayerNorm is the layer normalization operation proposed in Ba et al. (2016); MHAtt represents the multi-head attention mechanism introduced in Vaswani et al. (2017) which allows the model to jointly attend to information from different representation subspaces (at different positions); and FFN is a two-layer feed-forward network with ReLU as hidden activation function.
Figure 2 :
Figure 2: Overview of SUMO.A Transformer-based sentence-level encoder (yellow box) builds a vector for each sentence.The blue box presents the document-level encoder; red lines indicate iterative application of structured attention, where at each iteration the model outputs a roots distribution and the extractive loss is calculated based on gold summary sentences.si indicates the initial representation for senti; v k i Algorithm 2: Structured Summarization Model Input: Document d Output: Root probabilities r K after K iterations 1 Calculate sentence vectors s using sentence-level Transformer T S 2 v 0 ← s 3 for k ← 1 to K − 1 do 4Calculate unnormalized root scores:
Table 1 :
Test set results on the CNN/DailyMail and NYT datasets using ROUGE F 1 (R-1 and R-2 are shorthands for unigram and bigram overlap, R-L is the longest common subsequence.
Table 2 :
, i.e., a correct answer was marked System ranking according to human judgments on summary quality and QA-based evaluation.
CyArk specializes in digital preservation of threatened ancient and historical architecture.Founded by an Iraqi-born engineer, it plans to preserve 500 World Heritage sites within five years.Louisiana officials set July 31 deadline for applicants for the Road Home, grant program for homeowners who lost their houses to hurricanes Katrina and Rita.Program is expected to cost far more than $7.5 billion provided by Federal Government, in part because many more families have applied than officials anticipated.With cutoff date, State hopes to figure out how much more money it needs to pay for program. | 7,460.6 | 2019-06-07T00:00:00.000 | [
"Computer Science"
] |
Diabetes and Cancer
Diabetes and cancer are two common conditions. Many epidemiological studies suggest frequent co-occurrence of diabetes and cancer. Meta-analyses and the summary of recom‐ mendations from the American Diabetes Association (ADA) and American Cancer Society suggest association of cancer and diabetes including liver, pancreas, endometrial, colorectal, breast and bladder cancers.[1] Diabetes appears to protect against prostate cancer based on decreased incidence of prostate cancer in subjects with diabetes. Lung cancer appears not to be associated with diabetes, and data is inconclusive for renal cell cancer and lymphoma.[1] Most of the association data is on the relation of cancer to type 2 diabetes. The major concern is that type 2 diabetes is associated with three of the five leading causes of cancer mortality such as carcinoma of the colon [2], pancreas [3] and breast (postmenopausal) [4]. The excess risk for each cancer is ~30% (colon), ~50% (pancreas) and ~20% (breast). The majority of the epidemiological data on cancer incidence and mortality had been obtained in type 2 diabetic patients. A cohort study to examine cancer incidence among 29,187 patients in Sweden who were hospitalized for type 1 diabetes from 1965 through 1999, observed 355 incident cases of cancer and which corresponded to a 20% increase in overall cancer incidence among type 1 diabetes patients (RR:1.2; CI: 1.0 to 1.3) [5]. Patients with type 1 diabetes had elevated risks of cancers of the stomach (RR: 2.3; CI: 1.1 to 4.1), cervix (R: 1.6; CI: 1.1 to 2.2), and endometrium (RR: 2.7; CI: 1.4 to 4.7) [5]. Hyperinsulinemia most likely favors cancer in diabetic patients as insulin is a growth factor with pre-eminent metabolic as well as mitogenic effects. Insulin action in malignant cells is favored by mechanisms acting at both the receptor and postreceptor level. Obesity, hyperglycemia, and increased oxidative stress may also contribute to increased cancer risk in diabetes [6]. There are reports of concern of hypoglycemic therapies on cancer risk, especially with insulin analogue-Glargine. A growing body of evidence suggests that metformin potentially reduces the risk of cancer. Aspirin and non-aspirin nonsteroidal anti-inflammatory drugs appear to reduce recurrence of adenomas and incidence of
Introduction
Diabetes and cancer are two common conditions.Many epidemiological studies suggest frequent co-occurrence of diabetes and cancer.Meta-analyses and the summary of recommendations from the American Diabetes Association (ADA) and American Cancer Society suggest association of cancer and diabetes including liver, pancreas, endometrial, colorectal, breast and bladder cancers.[1] Diabetes appears to protect against prostate cancer based on decreased incidence of prostate cancer in subjects with diabetes.Lung cancer appears not to be associated with diabetes, and data is inconclusive for renal cell cancer and lymphoma.[1] Most of the association data is on the relation of cancer to type 2 diabetes.The major concern is that type 2 diabetes is associated with three of the five leading causes of cancer mortality such as carcinoma of the colon [2], pancreas [3] and breast (postmenopausal) [4].The excess risk for each cancer is ~30% (colon), ~50% (pancreas) and ~20% (breast).The majority of the epidemiological data on cancer incidence and mortality had been obtained in type 2 diabetic patients.A cohort study to examine cancer incidence among 29,187 patients in Sweden who were hospitalized for type 1 diabetes from 1965 through 1999, observed 355 incident cases of cancer and which corresponded to a 20% increase in overall cancer incidence among type 1 diabetes patients (RR:1.2;CI: 1.0 to 1.3) [5].Patients with type 1 diabetes had elevated risks of cancers of the stomach (RR: 2.3; CI: 1.1 to 4.1), cervix (R: 1.6; CI: 1.1 to 2.2), and endometrium (RR: 2.7; CI: 1.4 to 4.7) [5].Hyperinsulinemia most likely favors cancer in diabetic patients as insulin is a growth factor with pre-eminent metabolic as well as mitogenic effects.Insulin action in malignant cells is favored by mechanisms acting at both the receptor and postreceptor level.Obesity, hyperglycemia, and increased oxidative stress may also contribute to increased cancer risk in diabetes [6].There are reports of concern of hypoglycemic therapies on cancer risk, especially with insulin analogue-Glargine.A growing body of evidence suggests that metformin potentially reduces the risk of cancer.Aspirin and non-aspirin nonsteroidal anti-inflammatory drugs appear to reduce recurrence of adenomas and incidence of advanced adenomas in individuals with an increased risk of colorectal adenomas and colorectal cancer, and calcium may reduce recurrence of adenomas [7,8].In this chapter we will include epidemiological evidence of association of diabetes and cancer, possible mechanisms and the effect of hypoglycemic agents in relation to cancer.
Epidemiology of diabetes and cancer risk
The Centers for Disease Control and Prevention (CDC) reports that 25.8 million people (8.3% of the U.S. population) have diabetes.Among them, 18.8 million people have diagnosed diabetes and 7.0 million people have undiagnosed diabetes.http://www.cdc.gov/diabetes/pubs/estimates11.htm#1.Three hundred and forty six million people worldwide have diabetes.Type 2 diabetes comprises 90% of people with diabetes around the world, and is largely the result of excess body weight and physical inactivity.http://www.who.int/mediacentre/factsheets/fs312/en/index.html.The incidence of diabetes is increasing globally.The estimated incidence of 12.7 million new cancer cases in 2008 will rise to 21.4 million by 2030, with nearly two thirds of all cancer diagnoses occurring in low-and middle-income countries.http://www.who.int/nmh/publications/ncd_report_chapter1.pdf.A series of recent studies and meta-analyses confirm that the risk for several solid and hematologic malignancies (including liver, pancreas, colorectal, kidney, bladder, endometrial and breast cancers, and non-Hodgkin's lymphoma) is increased in patients with diabetes [6].Here the discussion follows on each malignancy with increased frequency in diabetes.
Pancreatic cancer
Type 2 diabetes mellitus is considered to be the third modifiable risk factor for pancreatic cancer after cigarette smoking and obesity.Based on the meta-analysis of 36 case-control and cohort studies, Everhart and Wright reported that the age and sex adjusted odds ratio for the development of pancreatic cancer in people with diabetes was 1.8 (CI of 1.7-1.9)[9].They also noted that increased frequency of pancreatic cancer occurs with long-standing diabetes, especially those with the duration of at least 5 years with a RR of 2.0 with CI of 1.2 to 3.2 [9].Gallo et al [10] from Italy reported that 40.2% of patients with pancreatic cancer and diabetes were diagnosed concomitantly or 15.9% were diagnosed within two years prior to diagnosis of cancer.Based on the data, the authors concluded increased prevalence of diabetes is related to pancreatic cancer and the diabetes is caused by the tumor [10].A causal relationship between diabetes and pancreatic cancer is also supported by findings from pre-diagnostic evaluations of glucose and insulin levels in prospective studies.Data show that up to 80% of patients with pancreatic cancer are either hyperglycemic or diabetic.Diabetes has been shown to improve after pancreatic-cancer resection, suggesting that diabetes is caused by the cancer [11].Pannala et al suggest new-onset diabetes may indicate subclinical pancreatic cancer, and patients with new-onset diabetes may constitute a population in whom pancreatic cancer can be detected early [11].A meta-analysis of three cohorts and six case-control studies revealed even a twofold risk in type 1 diabetes patients [12].A meta-analysis of 36 studies was carried out by Huxley and associates that included17 case-control and 19 cohort or nested case-control studies with information on 9220 individuals with pancreatic cancer [3].They noted that individuals with recent diagnosis of diabetes (< 4 years) had a 50% greater risk of the malignancy compared with individuals who had diabetes for ≥ 5 years (OR 2.1 vs. 1.5; p = 0.005).
Colorectal cancer
Increasing evidence suggests that a history of diabetes mellitus (DM) may be associated with an increased risk of colorectal cancer (CRC).In 2005, meta-analyses of 15 studies (six casecontrol and nine cohort studies) in the USA and Europe, including 2 593 935 participants, Larsson and associates found that diabetes was associated with an increased risk of colorectal cancer, compared with no diabetes (RR: 1.30; CI: 1.20-1.40][2].These results were said to be consistent between case-control and cohort studies and across the United States and Europe.The association did not differ significantly by sex, or by cancer sub-site.Diabetes was positively associated with colorectal cancer mortality.Results from a meta analysis of 41 cohorts was reported to support that diabetes was associated with an increased incidence of CRC (RR: 1.27;1.21-1.34)[13].In a recent systematic review and meta-analysis, twenty-four studies including eight case-control and 16 cohort studies, with a total of 3,659,341 participants were included [14].Meta-analysis of the 24 included studies indicated that diabetes was associated with an increased risk of colorectal cancer, compared with no diabetes (RR: 1.26; CI: 1.20-1.31),without heterogeneity between studies (p (heterogeneity) = 0.296).Sub-group analyses found that these results were consistent between case-control and cohort studies and among studies conducted in different areas.The association between diabetes and colorectal cancer incidence did not differ significantly by sex and sub-sites.Insulin therapy was also positively associated with risk of colorectal cancer (RR = 1.61; 1.18-1.35),with evidence of heterogeneity between studies (p (heterogeneity) = 0.014).
COX-2 and colon cancer
Although COX-2, the inducible isoform, is regularly expressed at low levels in colonic mucosa, its activity increases dramatically following mutation of the adenomatous polyposis coli (APC) gene, suggesting that β-catenin/T cell factor-mediated Wnt signaling activity may regulate COX-2 gene expression.In addition, hypoxic conditions and sodium butyrate exposure may also contribute to COX-2 gene transcription in human cancers [15].Because of its role in carcinogenesis, apoptosis, and angiogenesis, it is an excellent target for developing new drugs with selectivity for prevention and/or treatment of human cancers [16].
Breast cancer
Meta-analyses of 20 studies (5 case-control and 15 cohort studies) by Larsson and associates found that women with diabetes had a statistically significant 20% increased risk of breast cancer (RR, 1.20; CI:1.12-1.28)compared with no diabetes [4].The summary estimates were similar for case-control studies (RR, 1.18; CI: 1.05-1.32)and cohort studies (RR, 1.20; CI: 1.11-1.30).Metaanalysis of 5 cohort studies on diabetes and mortality from breast cancer yielded a summary RR of 1.24 and CI of 0.95-1.62 for women with versus without diabetes.Findings from this metaanalysis indicate that diabetes is associated with an increased risk of breast cancer [4].In the Nurses' Health Study, a total of 87,143 postmenopausal women, aged 30 to 55 years and free of cancer, were followed up for up to 26 years (1976-2002) and evaluated for the incidence of invasive breast cancer with increase in weight of at least 25.0 kg or more since age 18 years.Eliassen and associates noted an increased risk of breast cancer (RR: 1.45; CI: 1.27-1.66;p <. 001), with a stronger association among women who have never taken postmenopausal hormones (RR, 1.98; CI: 1.55-2.53).Data suggest that weight gain during adult life, specifically since menopause, increases the risk of breast cancer among postmenopausal women, whereas weight loss after menopause is associated with a decreased risk of breast cancer [17].
Hyperinsulinemia and cancer
Hyperinsulinemia most likely favors cancer in diabetic patients as insulin is a growth factor with pre-eminent metabolic but also mitogenic effects, and its action in malignant cells is favored by mechanisms acting at both the receptor and post-receptor level.Obesity, hyperglycemia, and increased oxidative stress may also contribute to increased cancer risk in diabetes [6].
Insulin
Insulin resistance and hyperinsulinemia are important factors in the development of type 2 diabetes.Insulin is known to stimulate cell proliferation and injection of insulin in rats promoted carcinogen-induced colon cancer [18].Insulin/insulin-like growth factor 1(IGF-1) receptors and G protein-coupled receptors (GPCR) signaling systems are implicated in autocrine-paracrine stimulation of a variety of malignancies, including ductal adenocarcinoma of the pancreas.Metformin, the most widely used drug in the treatment of type 2 diabetes, activates AMP kinase (AMPK), which negatively regulates mammalian target of rapamycin (mTOR) complex 1 (mTORC1) [19].Metformin was shown to significantly decrease the growth of pancreatic cancer cells xenografted into the flank of nude mice by interrupting the G proteincoupled receptor (GPCR), insulin receptor signaling by down-regulating the mTOR pathway [20].The GPCR and insulin receptor pathways are associated with increased DNA synthesis and pancreatic cancer cell growth.By negatively regulating GPCR and insulin receptor signally, and interrupting their cross talk, metformin is shown to decrease pancreatic cancer cell growth in mice.In a meta-analysis of epidemiological studies on markers of hyperinsulinemia and cancer, Pisani reported that subjects who develop colorectal and pancreatic cancers have increased pre-diagnostic blood levels of insulin and glucose [21].High insulin levels have also been shown to be associated with risk of endometrial cancer independent of estradiol [22].A link between breast cancer risk and hyperinsulinemia (measured by fasting C-peptide levels) has been shown mainly in postmenopausal breast cancer.Insulin levels were positively associated with endometrial carcinoma [HR: 2.33, CI: 1.13-4.82]among women not using hormone therapy [23].
Insulin resistance
The term insulin resistance denotes that action of insulin is impaired in peripheral target tissues that include skeletal muscle, liver, and adipose tissue.Recent literature supports the hypothesis that insulin resistance is a high risk for cancers.The molecular mechanisms for this association and the role in the neoplastic transformation process are being explored.Insulin is a major anabolic hormone that can stimulate cell proliferation.Adiposity induces adverse local and systemic effects that include adipocyte intracellular lipid accumulation, endoplasmic reticulum and mitochondrial stress, and insulin resistance, with associated changes in circulating adipokines, free fatty acids, and inflammatory mediators.Insulin resistance and associated hyperglycemia, hyperinsulinemia, and inflammation have been suggested to be the underlying mechanisms contributing to development of diabetes-associated pancreatic cancer.Hyperinsulinemia, insulin resistance and proinflammatory cytokines have been linked to neoplastic proliferation of various organ cells (Fig. 1).
In a study of the Polyp Prevention Trial of insulin and fasting glucose and risk of recurrent colorectal adenomas, Flood et al. noted the association of increased risk of adenoma recurrence and risk for recurrence of advanced adenomas with increased insulin [24].
Insulin-like Growth Factor-1 (IGF-1) and cancer
The IGF (insulin-like growth factor) system is essential for physiological growth.The IGF complex includes IGF-1 and IGF-2, their corresponding receptors (IGFR-1 and IGFR-2), IGF binding proteins 1-6 (IGFBPs), insulin receptor substrate (IRS).The signaling pathway of IGF plays a critical role in cellular proliferation and inhibition of apoptosis.Though growth hormone is the primary stimulus for IGF-1 production in the liver and insulin can increase the IGF-1 production by up-regulating growth hormone receptors in the liver, hyperinsulinemia can also increase IGF-1 bioavailability by decreasing hepatic secretion of IGF-binding protein (IGFBP)-1 and −2 [25].IGF-R, a tyrosine kinase receptor for IGF-I and IGF-II is said to play a role in malignant transformation, progression, protection from apoptosis, and metastasis as documented in cell culture, animal and human studies [26].Since the expression of IGF-1 receptors occurs in several cancers, the effects of insulin on cancer cell proliferation in vivo may involve IGF-1 stimulation and indirectly stimulate cancers.The IGF signaling pathway is involved in cell proliferation and differentiation and inhibits apoptosis.Increased expression of IGF-1, IGF-2, IGF-1R, or combinations have been documented in various malignancies including glioblastomas, neuroblastomas, meningiomas, medulloblastomas, carcinomas of the breast, malignancies of the gastrointestinal tract, such as colorectal and pancreatic carcinomas, and ovarian cancer [27].Higher IGF-1 levels were reported to be associated with increased colorectal adenoma risk (ORs = 1.58; 1.16-2.16),[28] and inversely associated with endometrial carcinoma (HR: 0.53; 0.31-0.90)[23].
Adiponectin and cancer
Adiponectin, which is also referred to as ACP30 (Acrp30), is secreted predominantly by white adipose tissue [29].Circulating concentrations of adiponectin are reduced in obesity and type 2 diabetes [30][31][32].Adiponectin is considered to have beneficial antineoplastic effects, which are believed to be due to anti-proliferative, anti-inflammatory effects, along with antagonizing insulin resistance [33].Adiponectin has been found to be an important negative regulator of hematopoiesis and the immune system as adiponectin was shown to suppress the growth of myelomonocyte cell lines in vitro by inducing apoptosis in myelomonocytic progenitor cells (leukaemia lines) and modulating expression of apoptosis-related genes and down regulating Bcl-2 gene expression [34].Epidemilogical data have also shown a link between low adiponectin levels and renal cell cancer [35,36]; especially large tumor size [37,38].Adiponectin was inversely associated with non-Hodgkin lymphoma and acute myeloblastic leukaemia (OR: 0.56; 0.34-0.94),but not with acute lymphoblastic leukaemia of B or T cell [39].In a number of epidemiological studies, adiponectin levels have been linked to breast cancer and are believed to inhibit breast cancer cell proliferation in vivo.This effect may be due to adiponectin-triggered cellular apoptosis in MDA-MB-231 breast cancer cells in the presence of 17β-estradiol.These findings may suggest that a cross-talk between adiponectin and estrogen receptor signaling exists in breast cancer cells and that adiponectin effects on the growth and apoptosis of breast cancer cells in vitro are dependent on the presence of 17β-estradiol [40].Low serum adiponectin is associated with colon, prostate and breast cancer [41].In a recent study, plasma adiponectin level was associated with decreased colorectal cancer risk [42].
Adiponectin and colorectal cancer: Adiponectin was shown to act on preneoplastic colon epithelial cells to regulate cell growth by inducing autocrine IL-6 production and trans-IL-6 signaling.In a prospective case control study, men with low plasma adiponectin levels were said to have a higher risk of colorectal cancer than men with higher levels [43].Meta analysis of 13 studies in patients with colorectal cancer and adenoma, though there was significant heterogeneity among studies, noted that there was a negative dose response relationship between levels of adiponectin and the risk of colorectal neoplasm in men [44].
Adiponectin and Endometrial cancer: Circulating adiponectin concentrations are inversely correlated with the incidence of endometrial carcinoma in epidemiological studies.In a study that investigated the direct effects of adiponectin on two endometrial carcinoma cell lines, HEC-1-A and RL95-2, adiponectin treatment led to suppression of cell proliferation in both cell types, which was primarily believed to be due to the significant increase of cell populations at G 1 /G 0 phase and secondary to the induction of apoptosis [45].
Cyclooxygenase and cancers
Cyclooxygenase-2 (COX-2) over expression has been found in several types of human cancers, such as colon, breast, prostate, and pancreas, and appears to control many cellular processes.The contribution of COX-2 to carcinogenesis and the malignant phenotype of tumor cells have been thought to be related to its abilities to: (1) increase production of prostaglandins, (2) convert procarcinogens to carcinogens, (3) inhibit apoptosis, (4) promote angiogenesis, (5) modulate inflammation and immune function, and ( 6) increase tumor cell invasiveness [62].
Proinflammatory cytokines
Adipocytes secrete a number of proinflammatory cytokines.These cytokines are known to promote insulin resistance and increase circulating TG, features of the metabolic syndrome.Several cytokines, reactive oxygen species (ROS), and mediators of the inflammatory pathway, such as activation of nuclear factor-κB (NF-κB) and COX-2, lead to an increase in cell proliferation, survival, and inhibition of the proapoptotic pathway, ultimately resulting in tumor angiogenesis, invasion, and metastasis [16].Proinflammatory cytokines implicated in carcinogenesis include IL-1, IL-6, IL-15, colony-stimulating factors, TNF-α, and the macrophage migration inhibitory factor.
Macrophage inhibitory cytokine-1 (MIC-1), also known as prostate-derived factor (PDF), is a molecule of the TGF-β super family and has been associated with the progression of various types of diseases including prostate cancer [63].Collectively, cytokines are considered as a linker between inflammation and cancer.Cytokines, ROS, and mediators of the inflammatory pathway (e.g., NF-κB and COX-2) have been shown to increase cell cycling, cause loss of tumor suppressor function, and stimulate oncogene expression and lead to cancers.Positive feedback mechanisms between estrogens and inflammatory factors may exist in the breast and contribute to hormone-dependent breast cancer growth and progression [64].Prostaglandin E synthase (PTGES) is also up-regulated by the proinflammatory cytokines TNF-α or IL-1β.Cytokines can enhance estrogen receptor (ER) activity and PTGES expression through the NF-κB pathway and cytokines can act to up-regulate aromatase expression as well as 17βhydroxysteroid dehydrogenase activity in breast tissue, thereby leading to a further increase in E2 production [64].
Diabetes therapies and cancer
Diabetes is associated with increased risks of bladder, breast, colorectal, endometrial, kidney, liver and pancreatic cancer and a lower risk of prostate cancer.Diabetes treatments may influence the risk of cancer independently of their effect on glycemia.This may complicate the issues in investigation of the association between diabetes and cancer.Epidemiologic studies have suggested a protective role for metformin.On the other hand, Glargine, the most widely used long-acting insulin analogue, may confer a greater risk than other insulin preparations, particularly for breast cancer.In general, diabetes therapies that are said to be associated with increased risk of cancer include, use of insulin, sulfonyl urea and DPP4 inhibitors.Diabetes therapies that are shown benefit by decreasing the cancer risk include use of metformin and thiazolidinediones.Here we will discuss association of cancer risk with each of the diabetes therapeutic agents.
Thiazolidinedione (TZD) and cancer risk in type 2 diabetes
TZDs are reported to decrease in cancer both in clinical data series and in vitro studies.In addition pioglitazone, one of the TZDs, was shown to increase the risk of bladder cancer in those with the use for more than 24 months.Based on the randomized clinical trials of rosiglitazone with duration of >24 weeks that includes eighty trials enrolling 16,332 and 12,522 patients in the rosiglitazone and comparator groups, Monami and associates reported that the incidence of malignancies was significantly lower with the use of rosiglitazone than in control groups (RR: 0.23; CI: 0.19-0.26)vs. RR of 0.44(CI:0.34-0.58)cases/100 patient-years; p < 0.05) [65].In a study using the diabetes registry of Hong Kong, Yang and associates reported that TZD usage was associated with 83% reduction in cancer risk in Chinese patients with type 2 diabetes in a dose-response manner [66].Using the Taiwan National Health Insurance claims database, significantly lower risk of liver cancer incidence was found for any use of rosiglitazone or pioglitazone; use of rosiglitazone but not pioglitazone was associated with decreased incidence of colorectal cancer [67].
Pioglitazone and bladder cancer
Using the Kaiser Permanente Northern California diabetes registry with 193,099 patients who were ≥40 years of age between 1997 and 2002, excluding those with prior bladder cancer, 30,173 patients treated with pioglitazone were identified.In this cohort of patients with diabetes, short-term use of pioglitazone was not associated with an increased incidence of bladder cancer, but use for more than 2 years was weakly associated with increased risk [68].
Using the general practice research database in the United Kingdom, use of pioglitazone more than 24 months was reported to be associated with an increased risk of incident bladder cancer among people with type 2 diabetes [69.Using data from the French national health insurance information system, in a population based study, pioglitazone use of more than 24 months was reported significantly associated with increased risk of bladder cancer [70].
Metformin
Studies of patients with T2DM on metformin have demonstrated a lower risk of cancer [71][72][73][74].In a recent study of the influence of treatment with metformin on survival after cancer diagnosis by Currie and associates, metformin was shown to be associated with survival benefit both in comparison with other treatments for diabetes and in comparison with a nondiabetic population [75].An observational cohort study from the United Kingdom suggested that metformin use may be associated with a reduced risk of cancer (HR:0.63(0.53-0.75) [72].The study noted that the reduced risk was after adjusting for sex, age, BMI, A1C, smoking, and other drug use [72].In a different database study from UK general practices, metformin use was reported to be associated with lower risk of cancer of the colon or pancreas, but did not affect the risk of breast or prostate cancer [71].Metformin use was associated with survival benefit in comparison with other treatments for diabetes and also in comparison with a nondiabetic population [75].Metformin has been associated with reduced risk of pancreatic cancer in diabetics and recognized as an antitumor agent with the potential to prevent and treat this cancer [76].A retrospective cohort study to investigate the survival benefit of metformin in patients with diabetes and pancreatic malignancy, from the MD Anderson Cancer Center from 2000 to 2009, reported that metformin users have a significant survival benefit compared to non-users (the median survival 16.6 vs. 11.5 months; p = 0.0044) [77].They also report a 33% decrease risk of death in patients who used metformin (HR: 0.67; p = 0.005) [77].This implies that metformin may have some beneficial effects on slowing the progression of pancreatic malignancy.However, specific pathogenesis is unclear and would have to be further explored.In an interesting finding from a data base study from Hong Kong, nonusers of metformin with low HDL cholesterol had an adjusted hazard ratio of 5.75 (CI: 3.03-10.90)compared with HDL cholesterol ≥ 1.0 mmol/L plus use of metformin [78].The reduction in cancer risk with the use of metformin in patients with type 2 diabetes is said to be dose related [74].In a Canadian population-based cohort study, Bowker and associates noted that patients with type 2 diabetes exposed to sulfonylureas and exogenous insulin had a significantly increased risk of cancerrelated mortality compared with patients exposed to metformin [79].In addition they also noted that, the sulfonylurea cohort had greater cancer-related mortality compared with the metformin cohort after multivariate adjustment [79].There are several in vitro and in vivo studies from cell lines and animal models support the benefit of metformin against cancer.There are several ongoing trials to examine the clinical outcomes.
Metformin and individual cancers
Long term use of metformin was shown to decrease risk of breast cancer in female patients with type 2 diabetes [80].In a case-control study using the U.K.-based General Practice Research Database, metformin use was associated with an adjusted odds ratio of 0.44 (CI: 0.24-0.82)for developing breast cancer compared with no use of metformin [81].In a similar study from the UK, long-term use of ≥ 40 prescriptions (>5 years) of metformin, based on 17 exposed case patients and 120 exposed control patients, was associated with an adjusted odds ratio of 0.44 (95% CI 0.24-0.82)for developing breast cancer compared with no use of metformin [80].A meta-analysis of 17 case-control studies and 32 cohort studies of diabetes and hepato-cellular carcinoma, metformin treatment was potentially protective [82].In a meta-analysis of five studies comprising 108,161 patients with type 2 diabetes, metformin therapy appears to be associated with a significantly lower risk of colorectal cancer in patients with type 2 diabetes [82].
Mechanism of metformin in reducing cancer
It has been postulated that the effect of metformin on cancer development and progression may be a result of decreased levels of insulin [83] and insulin resistance.However, the possible anticancer effect of metformin is believed to be mediated by the inhibition of mitochondrial oxidative phosphorylation leading to activated AMPK pathway or independent of non-AMPK pathway.Human breast cancer cells treated with metformin demonstrate inhibited proliferation and colony formation and increased cell cycle arrest [84].Studies have shown that metformin also has a direct effect on tumor cell proliferation [85].As stated previously, metformin activates AMPK.The AMPK/mTOR axis is modulated by liver kinase B1 (LKB1).LKB1 is a tumor suppressor that activates AMPK, leading to mTOR inhibition, resulting in inhibited cell growth [85].In vitro studies have shown that treatment with metformin inhibits cancer cell lines such as breast cancer breast cancer cells, [86] prostate cancer cell lines, [87,88] glioma, [89] and fibro sarcoma cell lines [90].
Therapeutic considerations in general
Therapeutic considerations need to focus on reduction of the risk factors.Various therapeutic interventions for weight reduction and healthy life style have been linked to a reduced cancer risk in the general population.Therapeutic strategies to decrease chronic hyperinsulinemia and insulin resistance may offer a general approach to prevention of cancer.Metformin is the insulin sensitizer used primarily in the treatment of type 2 diabetes mellitus.
In a retrospective study of long term benefits of bariatric surgery, a significant decrease in mortality from cancer-related deaths in the bariatric surgery group compared both with all subjects and matched subjects with a decrease of 60% for cancer at mean follow-up of 7.1 years.Anticytokine vaccines, inhibitors of proinflammatory NF-κB and COX-2 pathways, thiazolidinediones, and antioxidants are potentially useful for the prevention or treatment of pancreatic cancer.Similarly epidemiologic studies have documented a 40-50% reduction in the incidence of colorectal cancer in individuals taking nonsteroidal anti-inflammatory drugs (NSAIDs).The long-term use of COX-2-selective inhibitors has, unfortunately, demonstrated cardiovascular toxicity, so their use in cancer prevention and therapy is currently questionable.However, there is evidence suggesting that further development of novel COX-2-selective agents is needed for the prevention and/or treatment of human cancers, especially pancreatic cancer.Targeting PGE 2 signaling by EP receptor antagonists holds promise for the development of targeted therapy for the treatment of cancer [91].PPARs also play a role in the regulation of cancer cell growth.Recent evidence suggests that PPAR modulators may have beneficial effects as chemopreventive agents [92].
Recent clinical studies with IGF-1R inhibitors have revealed several obstacles to successful use in cancer therapy.Strategies to inhibit IGF-1R signaling such as tyrosine kinase inhibitors also disrupt IR signaling, resulting in hyperglycemia and hyperinsulinemia.Several strategies being considered are based on biomarkers that could identify subpopulations most likely to be responsive to IGF targeting.The combination therapies with other targeted drugs could maximize the therapeutic effects of IGF inhibitors [93].
Chemoprevention of colorectal cancer
Of cancers affecting both men and women, colorectal cancer (cancer of the colon and rectum) is the second leading cancer killer in the United States and the incidence increases with age.The U.S. Preventive Services Task Force report on the effectiveness of aspirin (ASA), nonaspirin non steroidal anti-inflammatory drugs (non-ASA NSAIDs), and cyclooxygenase-2 inhibitors (COX-2 inhibitors) for the chemoprevention of colorectal cancer indicate that aspirin and non-ASA NSAIDs appear to be effective at reducing the incidence of CRAs and CRC [7].The report also stated that more information is required to clarify the optimal dose, starting age, and duration of use of ASA since observational studies suggest that higher doses and prolonged use improve chemo-preventative efficacy.In a recent systematic review and metaanalysis for randomized controlled trials (RCTs) from United Kingdom, Cooper and associates identified 44 relevant RCTs and six ongoing studies [8].They reported that there was a statistically significant 21% reduction in risk of adenoma recurrence (RR: 0.79; CI: 0.68 to 0.92) in an analysis of aspirin versus no aspirin in individuals with a history of adenomas or CRC.In the general population, a significant 26% reduction in CRC incidence was demonstrated in studies with a 23-year follow-up (RR: 0.74; CI: 0.57 to 0.97).In individuals with a history of adenomas there was a statistically significant 34% reduction in adenoma recurrence risk (RR: 0.66; CI: 0.60 to 0.72) and a statistically significant 55% reduction in advanced adenoma incidence (RR 0.45; CI: 0.35 to 0.58).No studies assessed the effect of non-aspirin NSAIDs in the general population.There was said to be no significant effect of folic acid versus placebo on adenoma recurrence (RR: 1.16; CI: 0.97 to 1.39) or advanced adenoma incidence in individuals with a history of adenomas.In the general population there was said to be no significant effect of folic acid on risk of colorectal cancer (RR: 1.13; CI: CI: 0.77 to 1.64), although studies were of relatively short duration.Calcium use by familial adenomatous polyposis (FAP) patients was reported to have no significant reduction in polyp number or disease progression.In individuals with a history of adenomas there was said to be a statistically significant 18% reduction in risk of adenoma recurrence (RR: 0.82; CI: 0.69 to 0.98) and a non-significant reduction in risk of advanced adenomas (RR: 0.77; CI: 0.50 to 1.17).Though these studies are not selective for subjects with diabetes, prevention of colorectal cancer in subjects with diabetes is important and relevant.
Table 1 .
Risk factors common to both diabetes and cancer | 7,203 | 2013-06-26T00:00:00.000 | [
"Medicine",
"Biology"
] |
Unit Root Testing and Estimation in Nonlinear ESTAR Models with Normal and Non-Normal Errors
Exponential Smooth Transition Autoregressive (ESTAR) models can capture non-linear adjustment of the deviations from equilibrium conditions which may explain the economic behavior of many variables that appear non stationary from a linear viewpoint. Many researchers employ the Kapetanios test which has a unit root as the null and a stationary nonlinear model as the alternative. However this test statistics is based on the assumption of normally distributed errors in the DGP. Cook has analyzed the size of the nonlinear unit root of this test in the presence of heavy-tailed innovation process and obtained the critical values for both finite variance and infinite variance cases. However the test statistics of Cook are oversized. It has been found by researchers that using conventional tests is dangerous though the best performance among these is a HCCME. The over sizing for LM tests can be reduced by employing fixed design wild bootstrap remedies which provide a valuable alternative to the conventional tests. In this paper the size of the Kapetanios test statistic employing hetroscedastic consistent covariance matrices has been derived and the results are reported for various sample sizes in which size distortion is reduced. The properties for estimates of ESTAR models have been investigated when errors are assumed non-normal. We compare the results obtained through the fitting of nonlinear least square with that of the quantile regression fitting in the presence of outliers and the error distribution was considered to be from t-distribution for various sample sizes.
Introduction
The theoretical perception that transactions costs can bring non-linear change of deviation from equilibrium circumstances ( [1], [2], [3], [4], [5]) can describe the economic performance from a linear perspective of various variable that look non-stationary. One nonlinear form that captures this type of behavior is the smooth transition model ESTAR model. Nonlinear models of the ESTAR form can imply near unit root behavior near equilibrium. These models have been a great choice for nonlinear modeling in non-stationary context.
A few researchers ( [6], [7], [8] and [9]) also worked on smooth transition model. Kapetanios G., et al. [10], worked on the unit root testing of particular kind in nonlinear dynamics and provides an alternative frame work to test the unit root against nonlinear ESTAR, which is globally stationary.
Kapetanios G., et al. [10] has suggested a univariate testing procedure to identify the existence of nonstationarity against the nonlinear ESTAR process. He has also derived a nonstandard limiting distribution of Kapetanios tests. He then has examined the power/size performance of the small sample size and has found that in many cases under the alternative stationary ESTAR process has a power gain over the Dickey fuller test. We are going to use this test but with different error distribution to find the critical values and then the size and power of the test. Cook [11] states that when the degrees of freedom of the t-distribution innovations are lower, the amount of oversizing is larger. Due to that reason we use 5 degrees of freedom in the innovation process. Cook has found the results that experts may experience false rejection while studying heavy tailed data when they use normally distributed errors for obtaining critical values.
More or less every function which is included in nonlinear regression model can be written in closed form. The use of parameters in the functional part of a nonlinear regression model is rarely studied. We have used the quantile fitting and had compared it with the nonlinear least square for the ESTAR model in the presence of outliers. This technique proposes a system for estimating models for the conditional quantile function and the median function. To study the stochastic relationships among random variables, quantile regression is capable of providing a more complete statistical analysis.
The structure of the model at time t depends on the variable s t and the value of F(s t ;γ,c). The choice of F(s t ;γ,c) gives birth to different categories of models. This includes logistic STAR (LSTAR) model, self-exciting TAR (SETAR) model used by [14] and [15] in experimental studies.
In many cases when the domains are associated with small and large absolute values of s t , it is more suitable to specify the transition function. For this purpose the exponential type function can be used such as Where γ evaluate the tempo of shift from one domain to another and signifies the location for threshold value for s t . Replacing for the exponential function of Eq (2) in Eq (1) we get the exponential smooth transition autoregressive model. The transition function of ESTAR is symmetric and U-shaped around c. Michael et al. (1997), Sarantis (1999) and Taylor et al. (2001) applied this model has been applied to real exchange rate by a few researchers ( [16], [17]).
Testing Unit Root Hypothesis against ESTAR Using Heteroscedastic Consistent Covariance Matrix Estimators (HCCM)
We will consider the following univariate ESTAR model of order 1 the model used by [10]. Rearranging Eq (3) putting c = 0 and replacing the transition variable s t by a lagged dependent variable, y t−d for d > 0 gives us Where β = ϕ−1. Using the conditions β ! 0, γ < 0 and β + γ < 0, Kapetanios has proved the geometric ergodicity for the ESTAR process.
Putting β = 0, in Eq (4) consequently giving that in the central regime y t trail a unit root process. We set the delay parameter d = 1, which gives Our null hypothesis here will be that H 0 : θ = 0 against the alternative hypothesis that H 1: θ > 0. Testing the above null hypothesis is not practicable because of the fact that under the null hypothesis γ is not recognized. The issue has been resolved in [10] by introducing a t-type test using the auxiliary regression This has been computed from the first-order Taylor approximation to ESTAR. The Kapetanios t-statistics suggested is t NL ¼d=s:eðdÞ ð7Þ Whered is the estimate of δ and s:eðdÞ is the standard error ofd. Table 1 bellow shows the appropriate asymptotic critical values calculated from the simulations studies with DGP used as AR(1) process with unit root hypothesis With sample size 10,000 and with 50,000 replications. In the general case (case 1) we use the constant variance (homoscedasticity) in the estimation of the asymptotic critical values. Using conventional tests is dangerous [18] though the best performance among these is a HCCME used by [19]. We have also used the hetroscedastic consistent covariance matrix (HCCM) in the estimation of standard error which was introduced by [20] and [21]. In case 2 we have used the HC0, which estimates the provisional variance of the error for each mold of y i . HC0 is defined as In the presence of an unknown form, HC0 is a consistent estimator of the variance of the parameter.
In case 3 and 4 we have used the HC2 and HC3 which adjusts each ε 2 i as how much the case influences the estimates of the coefficient. Mackinnon and White [19] proposed Where h i is the i th element of the projection matrix. If the model is homoscedastic in actual fact, this estimate is unbiased. A third variation approximates additional intricate Jacknife estimator of Efron [22] as HC3 constructs confidence intervals that lean to be yet further conventional. HC0 was outperformed by HC2 and HC3
Using Wild Bootstrap Technique to Find the Size of Test in ESTAR Models
The bootstrap is a computationally rigorous technique commenced by [23]. Bootstrap resampling methods have been come forward as an authoritative tool for building inferential procedures in contemporary statistical data analysis. The inferential constituent of a statistical analysis typically consists of constructing confidence intervals, connecting a standard error to an estimator, hypothesis testing, prediction region constructing or choosing a regression equation. Habitually it is wholly required to estimate the sampling distribution of some statistic. There are different types of bootstrap resampling. We are using the wild bootstrap technique for finding the size of our test which has improvement over other resampling in small sample cases. It has been shown [24] that using Wild Bootstrap where there is no hetroscedasticity gives "better results" than employing standard bootstrap (with replacement) when there is hetroscedasticity. In this technique we resample with replacement the residuals that have been estimated from the initial fit, multiplied randomly by -1 or 1.Symmetry is assumed in this technique for the true residual distribution. To find the size of the test we use the DGP in Eq (8) with ε t drawn from t-distribution with 5 degrees of freedom and standard normal distribution separately in two tables below. The results shown in Table 2 and Table 3 using Kapetanios tstatistics and using wild-bootstrap technique respectively with 50,000 replications and the bootstrap sample has been taken to be 1000.
It is evident from the above table that the size of the test improves for Kapetanios test using the wild bootstrap technique.
Comparing Power of Kapetanios Test with Wild Bootstrap Procedure
For the comparison of the power performance of the two methods under discussion we use γ = −1, θ = 0.05, 0.02 and 0.001 in the data generation process And the error distribution in the DGP is taken from random normal and random t-distribution with 5 degrees of freedom, which is the finite variance case so that the variance, skewness and shape of the distribution can be found if needed. Results are shown in the following Tables 4-9.
Outliers are generated on the same process of contamination which is discussed below in section 6 to check as if it effect the power of the Kapetanios test or not. Tables 4 to 6 show that in case of contaminated data there is a significant decrease in the power of the Kapetanios test.
The above tables show that results in comparing the power performance of the two methods. It results in the significance increase in the power using the bootstrap technique when the errors are generated from normal distribution. In case when errors are generated from t-distribution, Kapetanios test performs better. Extreme care should be taken in applying the resampling technique when there are outliers especially when taking large samples from such data may give very misleading results.
Comparing the Power of Nonlinear Least Square with Quantile Regression Fit in the Presence of Outliers
Quantile regression has been introduced by [25], is a statistical method planned to estimate, and carry out inference about, restricted quantile functions. Quantile regression technique proposes a system for estimating models for the provisional median function, and the complete range of other conditional quantile functions. This function is able to provide a more precise analysis of the associations between random variables by combining the estimation of restricted mean functions and procedure for estimating a full class of restricted quantile functions. The DGP used for the purpose of comparison between the quantile fit and the least square regression in the nonlinear models is as follows Where ε t is normally distributed with zero mean and a constant variance equals to 1 for Tables 10 and 11 and ε t has a t-distribution with d.f equals 5 in Tables 12, 13 and 14.
Additive outliers are included in the data before the comparison of the two above mentioned methods. We consider the following contaminated model with the addition of outliers Where y c t denotes the response variable with contaminated data, δ t = y t + ϑ for some constant ϑ (theta used in the tables below) and δ equals 1 with probability "π" and "0" otherwise. This approach is also used by [26] and also by [27]. The difference here is that we are comparing it to the quantile fitting to the ESTAR model.
We fit the following model with the parameter values a's are predefined for the nonlinear least square. We are interested in a 2 less than zero, here we took a 3 = 1 as in most economic applications of first order ESTAR y t ¼ a 1 þ a 3 Ã ðy tÀ 1 À a 1 Þ Ã expfa 2 Ã ðy tÀ 1 À a 1 Þ 2 g þ ε t ð15Þ and the following model with parameter values b's are predefined for the nonlinear quantile fit The results are shown in the following Tables 10 to 14. Tables 10 to 14 compares the power of the nonlinear least square fit and a type of robust fit called the quantile fit in the presence of outliers. In Tables 10 and 11 the error distribution in the data generation process is normal and in the other three tables the error is from t-distribution with 5-d.f. General quantile fit performs better in the presence of outliers because quantile fits take into account the median of the process instead of the mean and the median is not effected by outliers. We have computed the power of nonlinear time series ESTAR model to check the correct fitting of the nonlinear model when the data is generated from a nonlinear process. We can see from the above four tables that the performance of the quantile fit has improved in most of the situation especially when the contamination is high. The difference in the power performance of the two methods of model fitting remains unchanged for different sample sizes. Unit Root Testing and Estimation in Nonlinear ESTAR. . .
Conclusion
We have tested unit root hypothesis against the ESTAR in the nonlinear framework and had obtained the critical values for testing the null hypothesis using non-normal errors (skewed and symmetric) for the Kapetanios test. We also have computed the size of the Kapetanios test for the sample size 50, 100, 250, 500 and 1000 and have compared it with the bootstrap testing procedure and found that the bootstrap procedure gives much closer results to the exact size of the test and hence reduces the problem of over sizing, which we have noticed in the simulations done by Cook [11]. The power of the wild bootstrap is computed and compared with the Kapetanios test statistics, which shows a significant improvement when wild bootstrap technique is used in case when the error is generated from normal distribution and Kapetanios test performs better in case when the error is generated from t-distribution. We have taken the error distribution in the DGP as normal in one case and t-disttn; with 5-degrees of freedom in the second case. In the last section we have estimated the power of ESTAR model on the basis of non-linear least square and have compared it with that of quantile fit which is supposed to be more robust and a type of median estimator. The R-code is defined for this simulations in S1 File. The results show for the data given in S2 File or S3 File, a slight increase in the power of the quantile fit for the nonlinear testing against the least square model fitting. Hence we conclude that the nonlinear quantile model fitting performs efficiently than the nonlinear least square model in the presence on outliers
Author Contributions
Conceptualization: UK A.
Data curation: UK A AA DMK.
Formal analysis: UK ZK. | 3,558.8 | 2016-11-29T00:00:00.000 | [
"Mathematics"
] |
The Electronic Effects of 3-Methoxycarbonylcoumarin Substituents on Spectral, Antioxidant, and Protein Binding Properties
Coumarin derivatives are a class of compounds with pronounced biological activities that depend primarily on the present substituents. Four 3-methoxycarbonylcoumarin derivatives with substituents of different electron-donating/electron-withdrawing abilities (Br, NO2, OH, and OMe) were investigated structurally by NMR, IR, and UV-VIS spectroscopies and density functional theory methods. The appropriate level of theory (B3LYP-D3BJ/6-311++G(d,p) was selected after comparing similar compounds’ experimental and theoretical structural parameters. The natural bond orbital and quantum theory of atoms in molecules were employed to investigate the intramolecular interactions governing stability. The electronic effects of substituents mostly affected the aromatic ring that the substituents are directly attached to. The antioxidant properties were investigated by electron paramagnetic resonance spectroscopy towards HO•, and the percentages of reduction were between 13% (6-Br) and 23% (6-OMe). The protein binding properties towards transport proteins were assessed by spectrofluorimetry, molecular docking, and molecular dynamics (MD). The experimentally determined binding energies were well reproduced by molecular docking, showing that the spontaneity of ibuprofen binding was comparable to the investigated compounds. The flexibility of HSA in MD simulations depended on the substituents. These results proved the importance of electronic effects for the protein binding affinities and antioxidant properties of coumarin derivatives.
Introduction
Coumarin and its derivates belong to the large benzopyrone family, structurally characterized by the benzene ring joined to a pyrone ring (Figure 1a). This simple structure is the parent molecule of all coumarins and represents the base of much more complex compounds [1]. Coumarins and their derivatives are naturally distributed in many plants' seeds, roots, leaves, and fruits (e.g., blueberries, green tea, and essential oils) [2]. In plants, coumarins are involved in the processes of growth and photosynthesis [3]. Microbial sources of coumarins are also described in the literature [4]. Some coumarins, such as novobiocin and coumermycin, are obtained from Streptomyces species and aflatoxins from Aspergillus species. Coumermycin, structurally similar to novobiocin, is nearly 50 times more potent than novobiocin against Escherichia coli and Staphylococcus aureus. Coumermycin also inhibits the supercoiling of DNA catalyzed by Escherichia coli DNA gyrase [5]. [12][13][14], anti-inflammatory [15,16], anticoagulant effects, and central nervous system stimulant effects [17] make coumarins interesting in modern medicinal chemistry. A potent antioxidant and protective effect against oxidative stress by scavenging the reactive oxygen species has also been reported for hydroxycoumarins [18][19][20]. In addition, the discovery of coumarins with weak estrogenic activity included this type of compound in the prevention of menopausal distress [2]. Coumarin removes protein and edema fluid from injured tissue by stimulating phagocytosis and enzyme production [21]. The electronic effects of substituents influence the stability and reactivity of coumarin derivatives, as well as many of the biological and pharmacological properties [22][23][24][25], such as lipophilicity, bioavailability [26], radical scavenging activity, and acetylcholinesterase inhibition [27]. Therefore, the electron donating/withdrawing effects of four common substituents (Br, NO2, OH, and OMe) on 3-methoxycarbonylcoumarins' structure ( Figure 1b), NMR, IR, UV-VIS, antioxidant, and protein binding activity are investigated experimentally and theoretically in this contribution. Density functional theory was applied to predict, compare, and assign the mentioned spectra with the experimental ones. The antioxidant activity of these derivatives towards the hydroxyl radical, a biologically relevant free radical, was investigated by electron paramagnetic resonance spectroscopy. The binding of selected derivatives towards bovine serum albumin was analyzed by spectrofluorimetry. Additionally, molecular docking and molecular dynamics studies were used to access the binding mode at the molecular level and to quantify the possible interactions.
Structural Optimization, NBO, and QTAIM Analysis
The selection of an appropriate DFT level of theory is often based on comparing optimized and crystallographic structures. As the experimental data are unavailable for the investigated compounds, the authors have selected two structurally similar molecules from the Cambridge crystallographic data center, namely coumarin-3-carboxylic acid (1) and 3-acetylcoumarin (2) (Figure 2). These compounds have the same core but differ in Coumarins have several attractive characteristics, such as low molecular weight, simple structure, high bioavailability, and high solubility in most organic solvents and oils, which, together with their multiple biological activities, ensure a prominent role in drug research and development. The extraction, synthesis, and evaluation of coumarins have become a rapidly developing topic because of their beneficial effects on human health, such as reducing the risk of cancer, diabetes, and cardiovascular and brain diseases [6]. Their antitumor [7][8][9][10], photochemotherapy, anti-HIV [11], antibacterial and antifungal [12][13][14], anti-inflammatory [15,16], anticoagulant effects, and central nervous system stimulant effects [17] make coumarins interesting in modern medicinal chemistry. A potent antioxidant and protective effect against oxidative stress by scavenging the reactive oxygen species has also been reported for hydroxycoumarins [18][19][20]. In addition, the discovery of coumarins with weak estrogenic activity included this type of compound in the prevention of menopausal distress [2]. Coumarin removes protein and edema fluid from injured tissue by stimulating phagocytosis and enzyme production [21].
The electronic effects of substituents influence the stability and reactivity of coumarin derivatives, as well as many of the biological and pharmacological properties [22][23][24][25], such as lipophilicity, bioavailability [26], radical scavenging activity, and acetylcholinesterase inhibition [27]. Therefore, the electron donating/withdrawing effects of four common substituents (Br, NO 2 , OH, and OMe) on 3-methoxycarbonylcoumarins' structure ( Figure 1b), NMR, IR, UV-VIS, antioxidant, and protein binding activity are investigated experimentally and theoretically in this contribution. Density functional theory was applied to predict, compare, and assign the mentioned spectra with the experimental ones. The antioxidant activity of these derivatives towards the hydroxyl radical, a biologically relevant free radical, was investigated by electron paramagnetic resonance spectroscopy. The binding of selected derivatives towards bovine serum albumin was analyzed by spectrofluorimetry. Additionally, molecular docking and molecular dynamics studies were used to access the binding mode at the molecular level and to quantify the possible interactions.
Structural Optimization, NBO, and QTAIM Analysis
The selection of an appropriate DFT level of theory is often based on comparing optimized and crystallographic structures. As the experimental data are unavailable for the investigated compounds, the authors have selected two structurally similar molecules from the Cambridge crystallographic data center, namely coumarin-3-carboxylic acid (1) and 3-acetylcoumarin (2) (Figure 2). These compounds have the same core but differ in the present substituents from the investigated coumarin derivatives. Three common functionals (B3LYP-D3BJ, M06-2X, and APFD) were selected for optimization, as these provided satisfactory results in previous studies on similar compounds [28,29]. Two parameters were used to compare experimental and theoretical data, the correlation coefficient (R) and mean absolute error (MAE) ( Table 1). The latter represents an average value of the absolute differences between the values of two data sets. The experimental and optimized bond lengths and angles for both compounds are shown in Tables S1-S4. angles were used further for the selection. The values of R in the case of 1 are similar and range between 0.936 (M06-2X) and 0.945 (APFD), while the same parameter for 2 is between 0.979 (M06-2X) and 0.983 (B3LYP-D3BJ). The lowest MAE values for bond angles were calculated for B3LYP-D3BJ. Therefore, the B3LYP-D3BJ/6-311++G(d,p) theory level was selected to optimize coumarin derivatives. The same level of theory was previously successfully applied for the spectral assignation and reactivity investigation of similar systems [29][30][31][32]. Four coumarin derivatives containing different substituents were optimized at the mentioned level of theory, and their structures are presented in Figure 2. These derivatives contain substituents with various electron-donating/electron-withdrawing effects. The bond lengths and angles are provided in Tables S5 and S6. The investigated coumarin The results shown in Table 1 prove that the selected functionals optimize the crystallographic structure well. This can be expected as the chosen structures were rigid with extended delocalization throughout a major part of a molecule. The correlation coefficients for bond lengths of 1 obtained by three common functionals are almost equal, while in the case of 2, the values range from 0.995 (APFD) to 0.998 (M06-2X). The MAE values are between 0.014 and 0.15 Å for 1 and between 0.006 and 0.007 Å for 2. Based on these values, a clear distinction between functionals' performances cannot be made, and bond angles were used further for the selection. The values of R in the case of 1 are similar and range between 0.936 (M06-2X) and 0.945 (APFD), while the same parameter for 2 is between 0.979 (M06-2X) and 0.983 (B3LYP-D3BJ). The lowest MAE values for bond angles were calculated for B3LYP-D3BJ. Therefore, the B3LYP-D3BJ/6-311++G(d,p) theory level was selected to optimize coumarin derivatives. The same level of theory was previously successfully applied for the spectral assignation and reactivity investigation of similar systems [29][30][31][32].
Four coumarin derivatives containing different substituents were optimized at the mentioned level of theory, and their structures are presented in Figure 2. These derivatives contain substituents with various electron-donating/electron-withdrawing effects. The bond lengths and angles are provided in Tables S5 and S6. The investigated coumarin structures are planar with elongated delocalization through an aromatic ring and strong donation from present carbonyl groups; therefore, a significant influence of substituents to structural parameters is not expected. This was proven when bond lengths for the common parts of molecules were compared (to 6-Br) and the MAE values were calculated. These values were 0.002 (6-OH) and 0.003 Å (6-NO 2 and 6-OMe), equal to the experimental uncertainty of the crystallographic analysis. The same applies to the bond angles with MAEs equal to 0.2 (6-NO 2 and 6-OH) and 0.3 • (6-OMe). The bond distance between carbon and heteroatom of the substituent is the highest in the case of 6-Br (1.915 Å). When OH and OMe groups are present, the bond lengths are almost equal, namely 1.365 (6-OH) and 1.359 Å (6-OMe). The planarity of substituents with the aromatic ring is preserved in the case of all substituents, with the angles on both sides being almost equal, which allows easy electron donation through positive resonant effect (in the case of Br, OH, and OMe) or electron withdrawal (in case of NO 2 ). These effects are additionally investigated within NBO and QTAIM analyses in the following paragraph.
The intramolecular interactions governing the stability of compounds can be quantified through the second-order perturbation theory. Some of the most prominent stabilization interactions are listed in Table S7. Due to the presence of aromatic rings and carbonyl groups within a structure, four derivatives share some common stabilization interactions. The most numerous stabilization interactions are formed within an aromatic ring, denoted as π(C-C)→π*(C-O1), with a range of energies between 55 and 150 kJ mol −1 (Table S8). This type of interaction with similar strength is present in the pyrone ring. This ring is additionally stabilized by the interactions between groups, including oxygen atoms. In these interactions, the donating group can be C=C through π(C3-C4)→π*(C2-O) interaction with energies between 97 and 109 kJ mol −1 . The lone pair on the oxygen atom can be a donating group through LP(O)→π*(C2-O) and LP(O)→π*(C9-C10), with energies higher than 120 kJ mol −1 in most cases. The carbonyl oxygen of the pyrone ring can also act as donating group with similar interaction energies. High stabilization energies were also obtained in the ester group of the substituent. As expected, the interactions within the ester group had stabilization energies of 120 kJ mol −1 . These oxygen atoms also stabilize the C4−C10 bond that connects the substituent and pyrone ring. Based on these values, it can be concluded that the coumarin core is very stable due to the extended delocalization between groups. Similar results were observed for other coumarin derivatives [12,29]. The main difference in the stability of the four derivatives comes from the stabilization interactions that include substituents. The neighboring C−C bonds interact with the C−substituent bond. These interactions can be weak with stabilization energy of only 5 kJ mol −1 in the case of the electron-withdrawing NO 2 group (Table S7). Stronger interactions were observed in the case of a weak electron donor, Br, with stabilization energies of 12 kJ mol −1 . An oxygen atom in OH and OMe groups acts as a strong electron-donor, and these stabilization interactions have energies between 23 and 42 kJ mol −1 . Bromine also interacts through a lone pair with neighboring bonds, LP(Br)→π*(C5-C6) (40 kJ mol −1 ) and LP(Br)→π*(C6-C7) (14 kJ mol −1 ). Nitrogen and oxygen atoms of the NO 2 group additionally stabilize other groups within a nitro group through strong interactions, such as LP(O)→π*(N-O) (540 kJ mol −1 ). Once OH and OMe are present as substituents, donation from substituents to the rest of the molecules increases. These energy values prove the assumption that extended delocalization is enhanced strongly in the case of the last two groups but also explain the co-planarity of all substituents with an aromatic ring, observed in the previous section.
QTAIM analysis was performed to investigate the effect of substituents on the neighboring groups. As it was previously shown that these substituents do not significantly affect groups that are further away, they are not included in the discussion. Table S8 lists AIM parameters for the selected bonds. Within structures of derivatives, there are several types of bonds. The first type includes carbon bonds of aromatic rings. These bonds are characterized by an electron density of 0.3 a.u. and Laplacian between −0.84 and −0.89 a.u, making them the strongest coumarin backbone bonds. A single bond between C1 and C2 has a lower electron density value (0.25-0.27 a.u.). The double bond between carbon and oxygen atoms has an electron density of 0.41 a.u. and Laplacian −0.24 a.u. The C−Br bond is the weakest when bonds with substituents are concerned (electron density equal to 0.15 a.u. and Laplacian equal to −0.14 a.u.). Bonds between carbon atoms and OH/OMe are almost equal in strength and do not depend on the group attached to the oxygen atom. These bonds have a higher electron density value than those between a carbon atom and a nitro group (electron density of 0.26 a.u.). These values nicely follow the possibility of the electron density exchange between the aromatic core and substituent. The positive and negative resonant effects between substituents and the aromatic core are essential for stabilizing and distributing electron density. It is also important to observe that the present substituents do not significantly influence the carbon-carbon bonds surrounding position C6. This proves the assumption that the electron density is exchanged between the coumarin core as a whole with the substituents and that the overall stability is preserved. On the other hand, the choice of substituents can influence the possible interactions with the proteins and free radicals, as investigated in the last section.
Experimental and Theoretical NMR Spectra
The experimental NMR spectra of coumarin derivatives were recorded in DMSO as a solvent and shown in Figures S1-S8. The theoretical chemical shifts were calculated for the structures optimized in DMSO at B3LYP-D3BJ/6-311++G(d,p) level of theory. The structure of TMS was optimized at the same level of theory, and calculated chemical shifts are shown relative to chemical shifts of hydrogen and carbon atoms of TMS to mimic the experimental conditions. The experimental and theoretical values of chemical shifts are presented in Table 2 (6-Br as an example) and Tables S9-S11. The notation of carbon atoms follows a scheme shown in Figure 2. These sets were compared by calculating the correlation coefficient and MAE values, as previously defined. The calculated 13 C NMR chemical shift values were systematically overestimated because of the explicit solvent effect, and the correction factor (0.95) was determined from the dependency between experimental and theoretical values. The 1 H NMR spectrum of investigated coumarin derivatives is relatively simple due to the rigid structure of the parent molecule that consists of aromatic and pyrone rings. The 1 H NMR spectrum of 6-Br contains a singlet at 3.85 ppm assigned to the methyl group of the aliphatic chain. The corresponding peak in the theoretical spectrum is located at 3.88 ppm. The hydrogen atoms attached to aromatic carbon atoms have the following chemical shifts 7.40, 7.90, and 8.20 ppm. The hydrogen atom attached to a carbon atom of the pyrone ring also has a high chemical shift value (8.70 ppm). The theoretical values of 1 H NMR differ on average for 0.1 ppm with a high correlation coefficient. The values of chemical shifts are higher than expected due to the proximity of oxygen atoms and the sp 2 hybridization of neighboring carbon atoms. Chemical shifts of hydrogen atoms within the other three coumarin derivatives (Tables S9-S11) depend slightly on the electronic nature of the substituent. In the case of nitro group substituent, the chemical shift values are somewhat higher than in the case of 6-Br (3.94 vs. 3.85 ppm for methyl group, etc.). The chemical shifts of hydrogen atoms within 6-OH and 6-OMe have almost unchanged values when compared to 6-Br. Due to the oxygen atom's proximity, the OMe substituent's hydrogen atoms have chemical shift values equal to 3.76 in the form of a singlet. In all three cases, the correlation coefficients are higher than 0.999, with MAE values between 0.10 and 0.13 ppm.
The 13 C NMR spectra allow better comparison between experimental and theoretical values because singlets are exclusively observed. The lowest chemical shift value in the 13 C NMR spectrum of 6-Br was obtained for the carbon atom of a methyl group (53 ppm in the experimental and 55 ppm in the theoretical spectrum). Carbon atoms in positions 3, 8, and 10 have chemical shifts around 120 ppm, an expected range for the aromatic carbon atoms. Once the electronegative oxygen atoms are present adjacent to carbon atoms, their chemical shifts increase to 132 ppm. Br as a substituent leads to a chemical shift of 137 ppm for the neighboring carbon atom in the experimental and theoretical spectra. The largest chemical shift value was obtained for the carbon atom of the ester group (C1'), namely 163 ppm in the observed spectrum and 167 ppm in the theoretical spectrum. The correlation between measured and calculated values was 0.995, with MAE equal to 2 ppm. The chemical shift of the carbon atom to which the NO 2 group is linked is 129 ppm (132 ppm in the theoretical spectrum), whereas the chemical shifts of the neighboring carbons C5 and C7 are lowered by several ppm. The rest of the values are almost identical. The same applies to 6-OH and 6-OMe coumarin derivatives. The correlation coefficients in these three cases are higher than 0.995, with MAE values of 2 ppm. These results prove that the selected level of theory is suitable for the investigated systems. In the following two sections, the comparison between experimental and theoretical IR and UV-VIS spectra is performed.
Experimental and Theoretical IR Spectra
The IR spectra of four coumarin derivatives were obtained for the compounds in the solid phase within the KBr pallet between 4000 and 400 cm −1 . The theoretical spectra were calculated for isolated compounds in a vacuum and visualized in the GaussView 6.
The theoretical values reproduced the experimental ones well, as observed in the most prominent peaks belonging to C=O stretching vibrations, which was additional proof that the selected level of theory was suitable for this class of compounds. The experimental and theoretical spectra of all derivatives are shown in Figure 3. The analysis of IR spectra is separated into three regions, with the most prominent bands explained. The first region is between 4000 and 2900 cm −1 and includes different X−H (X= C and O) stretching vibrations. In the case of 6-Br, this region is dominated by the broad peak of Caromatic−H stretching vibrations at 3200 cm −1 in the experimental and 3138 cm −1 in the theoretical spectrum. The weaker band is at 3043 cm −1 , attributed to the C−H stretching vibra- The first region is between 4000 and 2900 cm −1 and includes different X−H (X= C and O) stretching vibrations. In the case of 6-Br, this region is dominated by the broad peak of C aromatic −H stretching vibrations at 3200 cm −1 in the experimental and 3138 cm −1 in the theoretical spectrum. The weaker band is at 3043 cm −1 , attributed to the C−H stretching vibrations of a methyl group. This wavenumber is somewhat higher than expected due to the oxygen atom directly attached to the methyl group [28]. The same peaks can be observed for the other three derivatives (Figure 3). A noticeable difference exists in the experimental IR spectrum of 6-OH with a broad peak assigned to O−H stretching vibration at 3242 cm −1 . This band is a weak peak at 3400 cm −1 in the theoretical spectrum. The bathochromic shift in the experimental spectrum is due to the physical state of the sample and the possible formation of hydrogen bonds between molecules of investigated compounds.
The second part of the spectrum, between 1800 and 1000 cm −1 , also contains several prominent peaks of different vibrational modes. Two peaks at 1751 and 1698 cm −1 in the experimental spectrum of 6-Br belong to the C=O stretching vibrations of two carbonyl groups. The first peak is assigned to the carbonyl group of the pyrone moiety. These two peaks are located at 1759 and 1723 cm −1 in the theoretical spectrum. This difference in values is acceptable for the calculated values bearing in mind that the spectra were predicted for the compounds in a vacuum [28]. The effect of electron-donating/withdrawing substituents can be followed on these bands in the theoretical spectrum. In the case of 6-NO 2 , there is a hypsochromic shift in the position of these two bands (1781 and 1726 cm −1 ) due to the electron-withdrawing nature of the substituent. A much stronger influence is observed on the position of the C=O group of the pyrone ring that is part of the molecule backbone. The effect on the alkyl chain in position 3 is much weaker. As previously discussed, the electron-donating effect of OH and OMe groups increases the electron density within a molecule and leads to structural relaxation. The excess electron density leads to the bathochromic shift in wavenumber values. The peaks of C=O stretching vibration in the case of 6-OH (1749 and 1714 cm −1 ) and 6-OMe (1742 and 1722 cm −1 ) are bathochromically shifted. This region contains C−O stretching vibrations due to the ester group of the substituent in position 3. Spectra of 6-OH and 6-OMe have a strong band at 1270 cm −1 attributed to the C aromatic −O stretching vibration. Due to the system's rigidity, this value shifts towards larger wavenumbers than C aliphatic −O stretching vibration. The nitro group has several characteristic vibrations [33,34]. The symmetric stretching vibration of the nitro group was located at 1340 cm −1 in the experimental spectrum and 1349 cm −1 in the theoretical spectrum of 6-NO 2 as strong bands. The asymmetric stretching vibration of the nitro group is positioned at higher wavenumbers (1537 cm −1 in experimental and 1541 cm −1 in theoretical spectra).
The spectrum between 1000 and 500 cm −1 mainly includes medium to low-intensity bands of bending, torsion, and out-of-plane vibrations [12,35]. When different substituents are concerned, this region has several important bands. The C−Br stretching vibration is located at 660 cm −1 in the experimental and 665 cm −1 in the theoretical spectrum of 6-Br. At 845 cm −1 , a deformation vibration of the nitro group can be observed in the spectrum of 6-NO 2 [33]. The position of this band is shifted towards higher wavenumbers due to the proximity of the aromatic ring. Additionally, there are rocking vibrations of the nitro group at 532 cm −1 . All of these values are well-reproduced in the experimental spectrum.
Experimental and Theoretical UV-Vis Spectra
The experimental UV-Vis spectra of four derivatives were recorded in ethanol between 800 and 200 nm. The theoretical UV-Vis spectra were calculated for the structures reoptimized in ethanol using the CPCM model. The experimental spectra of derivatives are shown in Figure 4. The electronic spectrum of 6-Br is characterized by three broad peaks between 220 and 380 nm (347, 291, and 228 nm). The first peak is attributed to n→π transition, while the other two can be assigned as π→π transitions. As previously stated, Br influences the rest of the molecule through a weak resonance effect. The peaks shift towards longer wavelengths once this atom is exchanged with the OH group, as Figure 4 shows, due to the strong positive resonant effect. The first peak shifts to 368 nm, while the second shifts to 300 nm. The methoxy group also donates electron density to the rest of the molecule through a strong positive inductive effect. The shifts are lower than in the case of the OH group. When the NO 2 group is present in a molecule, the dominant effect is electron withdrawal from the other parts of the molecule, leading to the hypsochromic shift, and the peaks are positioned at 331 and 271 nm. These electronic spectra changes nicely follow the expected electron donating/withdrawing effects of substituents [24]. The addition of various substituents can be important for the synthesis of novel fluorophores [36]. The theoretical spectra of four derivatives were predicted at the B3LYP-D3BJ/6-311++G(d,p) level of theory upon optimization in ethanol to mimic the experimental conditions ( Figure 4). The longest wavelength peak in the spectrum of 6-Br is located at 351 nm. This transition is assigned to the HOMO→LUMO transition (96%) with an oscillator strength of 0.1344 (Table S12). The second and third prominent transitions are at 298 and 230 nm, assigned to HOMO-1→LUMO (85%) and HOMO→LUMO+1 (38%)/HOMO→LUMO+2(45%), respectively. The relative intensity order of the mentioned transitions is the same as for the experimental peaks. These results prove that the theoretical spectrum reproduces the experiment well. The differences of several nm can be explained by the solvent effect and formation of specific solvent-solute interactions, which are not included in the used solvent model. The theoretical assignments for the other three compounds are presented in Table S12. The calculated values of electronic transitions are well correlated with the experimental ones, especially in the case of 6-OH and 6-OMe, with differences of several nm. As explained previously, the computed electronic transition values shift predictably, which is in line with the NBO and QTAIM analyses of the intramolecular effects. In the case of 6-NO2, a broad peak at 271 in the experimental spectrum probably presents an overlap between several transitions to energetically similar excited states due to the n orbitals of heteroatoms of the NO2 group ( Figure 4). The other peak is at 331 nm in the experimental and 333 nm in the theoretical spectrum. This type of analysis again proved the applicability of the chosen level of theory. The theoretical spectra of four derivatives were predicted at the B3LYP-D3BJ/6-311++G(d,p) level of theory upon optimization in ethanol to mimic the experimental conditions ( Figure 4). The longest wavelength peak in the spectrum of 6-Br is located at 351 nm. This transition is assigned to the HOMO→LUMO transition (96%) with an oscillator strength of 0.1344 (Table S12). The second and third prominent transitions are at 298 and 230 nm, assigned to HOMO-1→LUMO (85%) and HOMO→LUMO+1 (38%)/HOMO→LUMO+2(45%), respectively. The relative intensity order of the mentioned transitions is the same as for the experimental peaks. These results prove that the theoretical spectrum reproduces the experiment well. The differences of several nm can be explained by the solvent effect and formation of specific solvent-solute interactions, which are not included in the used solvent model. The theoretical assignments for the other three compounds are presented in Table S12. The calculated values of electronic transitions are well correlated with the experimental ones, especially in the case of 6-OH and 6-OMe, with differences of several nm. As explained previously, the computed electronic transition values shift predictably, which is in line with the NBO and QTAIM analyses of the intramolecular effects. In the case of 6-NO 2 , a broad peak at 271 in the experimental spectrum probably presents an overlap between several transitions to energetically similar excited states due to the n orbitals of heteroatoms of the NO 2 group (Figure 4). The other peak is at 331 nm in the experimental and 333 nm in the theoretical spectrum. This type of analysis again proved the applicability of the chosen level of theory.
EPR Measurements of Antioxidant Activity
It has been previously shown that coumarin derivatives are effective radical scavengers in biological systems and through advanced oxidation processes in wastewater management [37][38][39]. The reactivity of chosen coumarin derivatives, at 10 −5 M final concentration, towards HO • was followed by the EPR spectroscopy through a relative decrease in the DEPMPO/HO • signal. The EPR spectra before and after adding coumarin derivatives are shown in Figure 5 below. the DEPMPO/HO • signal. The EPR spectra before and after adding coumarin derivatives are shown in Figure 5 below. Figure 5 shows that the signal of DEPMPO/HO • decreases upon the addition of investigated coumarin derivatives, proving their anti-radical activity. The radical scavenging activities calculated as reduction percentages are 23% (6-OMe), 16% (6-OH), 15% (6-NO2), and 13% (6-Br). The similarity in these values was expected as selected derivatives do not contain groups usually responsible for anti-radical activity, except in the case of 6-OH. Based on these values, the investigated compounds can be considered moderate scavengers compared to standard antioxidants such as fisetin, baicalein, quercetin, morin, and kaempferol, which reduce the signal of the DEPMPO/HO • adduct between 30 and 43% [40]. The activity of these compounds was lower than that of 4-hydroxycoumarin under the same experimental conditions [38]. The relative order of anti-radical activity can be explained by the electron-donating/withdrawing effects of substituents. Two coumarin derivatives with substituents characterized by the strongest electron donation have higher scavenging activity values, thus proving the importance of electron delocalization for activity. The possible mechanism of activity can be postulated to radical adduct formation, as there are no substituents present in a structure that commonly donate hydrogen atoms/protons to free radicals [38]. Once the groups with dominant negative inductive and Figure 5 shows that the signal of DEPMPO/HO • decreases upon the addition of investigated coumarin derivatives, proving their anti-radical activity. The radical scavenging activities calculated as reduction percentages are 23% (6-OMe), 16% (6-OH), 15% (6-NO 2 ), and 13% (6-Br). The similarity in these values was expected as selected derivatives do not contain groups usually responsible for anti-radical activity, except in the case of 6-OH. Based on these values, the investigated compounds can be considered moderate scavengers compared to standard antioxidants such as fisetin, baicalein, quercetin, morin, and kaempferol, which reduce the signal of the DEPMPO/HO • adduct between 30 and 43% [40]. The activity of these compounds was lower than that of 4-hydroxycoumarin under the same experimental conditions [38]. The relative order of anti-radical activity can be explained by the electron-donating/withdrawing effects of substituents. Two coumarin derivatives with substituents characterized by the strongest electron donation have higher scavenging activity values, thus proving the importance of electron delocalization for activity. The possible mechanism of activity can be postulated to radical adduct formation, as there are no substituents present in a structure that commonly donate hydrogen atoms/protons to free radicals [38]. Once the groups with dominant negative inductive and negative resonant effects are present, the aromatic part of the molecule is more destabilized by radical adduct formation. These findings align with the previous discussion on the impact of electron effect on spectra and reactivity.
Spectrofluorometric and Molecular Docking Investigation of Binding to BSA
The binding of compounds to BSA influences the possibility of their distribution throughout the organism. In this contribution, the binding process was investigated experimentally and through molecular docking simulations to examine the effect of substituents on the binding energy and interactions with surrounding amino acids in the active pocket. The spectra of BSA before and after the addition of compounds at three different temperatures are provided in and Figure 6 and Figures S9-S11. The double-log Stern-Volmer relationship was used to calculate binding constants and obtain the thermodynamic parameters of binding through their change with temperature. A temperature range between 27 and 37 • C was selected to cover normal body temperature so the conclusion on interactions under physiological conditions can be obtained. The paper shows the spectra of 6-Br binding to BSA as a representative example. Table 3 lists all four coumarin derivatives' binding constants and thermodynamic parameters.
Spectrofluorometric and Molecular Docking Investigation of Binding to BSA
The binding of compounds to BSA influences the possibility of their distribution throughout the organism. In this contribution, the binding process was investigated experimentally and through molecular docking simulations to examine the effect of substituents on the binding energy and interactions with surrounding amino acids in the active pocket. The spectra of BSA before and after the addition of compounds at three different temperatures are provided in Figures 6 and S9-S11. The double-log Stern-Volmer relationship was used to calculate binding constants and obtain the thermodynamic parameters of binding through their change with temperature. A temperature range between 27 and 37 °C was selected to cover normal body temperature so the conclusion on interactions under physiological conditions can be obtained. The paper shows the spectra of 6-Br binding to BSA as a representative example. Table 3 lists all four coumarin derivatives' binding constants and thermodynamic parameters. As observed in and Figure 6 and Figures S9-S11, fluorescence emission intensity decreased concentration-dependently upon adding coumarin derivatives. The correlation coefficients for the double-log Stern-Volmer plots are provided in Table 3; for all data sets, they are between 0.97 and 0.99. The decrease in fluorescence intensity is similar for all compounds except for 6-OMe. In this case, an isosbestic point can be seen in Figure S11, in addition to the appearance of a new peak at 480 nm. The explanation for this peak is probably due to the formation of covalent bonds between coumarin derivatives and amino acids surrounding the active positions, as 6-OMe itself is not fluorescent. These changes in spectra were not further investigated, although it would be essential to understand the possible underlying mechanism.
The dependence of lnK to reciprocal temperature is linear for all compounds, which allowed the determination of the thermodynamic parameters of binding. In the case of 6-Br and 6-NO 2 , the dependence has a negative slope, leading to the positive value of ∆H bind , 556 and 360 kJ mol −1 , respectively. These values are −254 (6-OH) and −236 kJ mol −1 (6-OMe) for the other two derivatives. According to data shown in Table 3, the most spontaneous binding at 37 • C was between BSA and 6-NO 2 (−41.0 kJ mol −1 ), followed by 6-Br (−36.7 kJ mol −1 ). Somewhat lower ∆G bind values were calculated for 6-OH (−24.6 kJ mol −1 ) and 6-OMe (−23.7 kJ mol −1 ). The reason probably lies in the formed interactions with the surrounding amino acids in the active pocket, as investigated further. With the decrease in temperature, between 37 and 27 • C, the ∆G bind values decrease in the case of 6-NO 2 and 6-Br and increase for the other two derivatives. This leads to the conclusion that the dominant factor for binding is the change in reaction entropy. With the decrease in temperature, the movement of routable substituents such as OH and OMe decreases, allowing the formation of stronger interactions, which proves the importance of the substituent's flexibility for the binding process. Further experimental techniques for analyzing the changes in BSA confirmation include circular dichroism, gel electrophoresis, and viscosity measurements.
Molecular docking analysis was performed on human serum albumin (HSA), a protein with similar active pockets as BSA. HSA is more relevant for biological studies and was therefore selected for molecular docking analysis. As it is well known in the literature, HSA has several binding sites, with the main two being named "Sudlow I" and "Sudlow II," also named as "warfarin binding site" and "benzodiazepine/ibuprofen binding site", respectively. Sudlow site I is under subdomain IIA, whereas Sudlow site II is in subdomain IIIA. Sudlow site I preferentially bind large heterocyclic chemicals such as azapropazone, phenylbutazone, and warfarin (WF). Sudlow site II predominantly binds aromatic chemicals such as ibuprofen (IP). The binding affinities for the most stable conformations in both active pockets for all four coumarin derivatives are presented in Table 4. Warfarin and ibuprofen were docked into Sudlow I and II active pockets, along with investigated compounds. According to the results of molecular docking simulations presented in Table 4, the highest binding affinity was exhibited by compound 6-NO 2 (−31.4 kJ mol −1 in Sudlow 1 and −30.9 kJ mol −1 in Sudlow II), followed closely by the compounds 6-Br, 6-OH, and 6-OMe. The calculated range of values and the relative affinity order nicely follow the experimentally determined one. This also verifies the assumption that the molecular docking study with HSA can model the binding process to BSA. Interestingly, compounds with bromine and methoxy groups have slightly higher binding affinity at Sudlow II than the first active pocket (−30.1 vs. −28.5 kJ mol −1 for 6-Br, for example). In the case of the compound with a nitro group, higher affinity was calculated at Sudlow I. Compound 6-OH shows equal binding affinity towards both sites. The binding affinity of warfarin at Sudlow I is the highest among the investigated compounds (−34.3 kJ mol −1 ). The binding of Ibuprofen at Sudlow II is thermodynamically less favorable (−28.3 kJ mol −1 ) compared to 6-Br, 6-NO 2 , and 6-OH.
Binding energies from Table 4 indicate lower binding affinity of investigated compounds than WF, which can be explained by closer examination of protein-ligand interactions at the active site. As can be seen from Figure 7, the strongest interactions WF makes are four conventional hydrogen bonds with ARG257, TYR150, and ARG222. These amino acid residues represent the active site Sudlow I. Binding energies from Table 4 indicate lower binding affinity of investigated compounds than WF, which can be explained by closer examination of protein-ligand interactions at the active site. As can be seen from Figure 7, the strongest interactions WF makes are four conventional hydrogen bonds with ARG257, TYR150, and ARG222. These amino acid residues represent the active site Sudlow I. Data from Figure 8 suggest that 6-OMe forms hydrogen bonds with two of the three mentioned amino acid residues due to the highest similarity to ibuprofen. Additionally, 6-OMe forms a hydrogen bond with LYS199. Regarding π-π interactions, LEU230 and LEU238 are amino acids that interact with WF and 6-OMe. The highest number of conventional hydrogen bonds, alongside the WF, is built between the 6-NO2 and GLN459, LYS195, LYS432, and LYS432, respectively. However, 6-NO2 with TYR452 forms an unfavorable π-π interaction, reflected in binding energies higher than WF's, despite the same number of conventional hydrogen bonds. The number of conventional hydrogen bonds at Sudlow site II directly correlated to the binding energies and investigated compounds' overall binding potential. The compound with a nitro group formed three conventional hydrogen bonds with GLN459, LYS195, and LYS 436, respectively (Figure 8), consequently showing the lowest binding energies. It was followed closely by 6-Br, forming three conventional hydrogen bonds, two with ARG410 and one with LYS414. Ibuprofen and 6-OH formed two conventional hydrogen bonds, while 6-OMe showed no conventional hydrogen bonds. Data from Figure 8 suggest that 6-OMe forms hydrogen bonds with two of the three mentioned amino acid residues due to the highest similarity to ibuprofen. Additionally, 6-OMe forms a hydrogen bond with LYS199. Regarding π-π interactions, LEU230 and LEU238 are amino acids that interact with WF and 6-OMe. The highest number of conven-tional hydrogen bonds, alongside the WF, is built between the 6-NO 2 and GLN459, LYS195, LYS432, and LYS432, respectively. However, 6-NO 2 with TYR452 forms an unfavorable π-π interaction, reflected in binding energies higher than WF's, despite the same number of conventional hydrogen bonds. The number of conventional hydrogen bonds at Sudlow site II directly correlated to the binding energies and investigated compounds' overall binding potential. The compound with a nitro group formed three conventional hydrogen bonds with GLN459, LYS195, and LYS 436, respectively (Figure 8), consequently showing the lowest binding energies. It was followed closely by 6-Br, forming three conventional hydrogen bonds, two with ARG410 and one with LYS414. Ibuprofen and 6-OH formed two conventional hydrogen bonds, while 6-OMe showed no conventional hydrogen bonds. If further examined, data from Table 4 indicate the binding potential of the investigated compounds similar to the binding potential of IP in Sudlow site II. These structures will be further examined by molecular dynamics simulations. However, the overall binding affinity of all investigated compounds was good, indicating the high possibility of transport of investigated compounds by albumin throughout the organism.
Molecular Dynamics
Molecular dynamics simulations were performed to investigate further results obtained through molecular docking. According to the RMSD values in Figure S12, the binding of investigated compounds has a comparable effect on changes in RMSD values as ibuprofen. This indicates that the chemical behavior of the investigated compounds, in terms of altering the secondary structure of HSA, is comparable to that of ibuprofen. Additionally, RMSD values of HSA itself show no significant changes in the secondary structure, regardless of the bonded ligand. However, larger differences were observed when examining the RMSF and Rg values (Figures 9 and 10). According to the RMSF values, the stabilizing effect of binding 6-OH and 6-OMe on the flexibility of HSA amino acid residues was comparable but slightly higher than that of ibuprofen. An even greater stabilization of amino acid residues was observed when HSA was complexed with 6-Br. In contrast, the binding of 6-NO2 increased the flexibility of HSA amino acid residues compared to the binding of ibuprofen. If further examined, data from Table 4 indicate the binding potential of the investigated compounds similar to the binding potential of IP in Sudlow site II. These structures will be further examined by molecular dynamics simulations. However, the overall binding affinity of all investigated compounds was good, indicating the high possibility of transport of investigated compounds by albumin throughout the organism.
Molecular Dynamics
Molecular dynamics simulations were performed to investigate further results obtained through molecular docking. According to the RMSD values in Figure S12, the binding of investigated compounds has a comparable effect on changes in RMSD values as ibuprofen. This indicates that the chemical behavior of the investigated compounds, in terms of altering the secondary structure of HSA, is comparable to that of ibuprofen. Additionally, RMSD values of HSA itself show no significant changes in the secondary structure, regardless of the bonded ligand. However, larger differences were observed when examining the RMSF and Rg values (Figures 9 and 10). According to the RMSF values, the stabilizing effect of binding 6-OH and 6-OMe on the flexibility of HSA amino acid residues was comparable but slightly higher than that of ibuprofen. An even greater stabilization of amino acid residues was observed when HSA was complexed with 6-Br. In contrast, the binding of 6-NO 2 increased the flexibility of HSA amino acid residues compared to the binding of ibuprofen. Regarding the flexibility of amino acid residues in HSA in the absence of bound to its active sites, it was observed that the binding of all investigated com including ibuprofen, led to an increase in residue flexibility, except for 6-Br. comes to changes in the compactness of HSA, the binding of 6-Br and 6-OMe has effect as the binding of ibuprofen. On the other hand, the binding of 6-NO2, espe OH, reflects the increase in the radius of gyration, indicating changes in the com of the HSA in comparison to the HSA-ibuprofen complex. Concerning the HSA i binding of ibuprofen and 6-Br increases protein compactness, whereas the bind OMe and 6-NO2 does not induce any significant changes in protein compactness values are in reverse correlation with the binding energies and inhibitory const tained through molecular docking simulations. However, RMSF values cannot b Regarding the flexibility of amino acid residues in HSA in the absence of ligands bound to its active sites, it was observed that the binding of all investigated compounds, including ibuprofen, led to an increase in residue flexibility, except for 6-Br. When it comes to changes in the compactness of HSA, the binding of 6-Br and 6-OMe has a similar effect as the binding of ibuprofen. On the other hand, the binding of 6-NO 2 , especially 6-OH, reflects the increase in the radius of gyration, indicating changes in the compactness of the HSA in comparison to the HSA-ibuprofen complex. Concerning the HSA itself, the binding of ibuprofen and 6-Br increases protein compactness, whereas the binding of 6-OMe and 6-NO 2 does not induce any significant changes in protein compactness. The Rg values are in reverse correlation with the binding energies and inhibitory constants obtained through molecular docking simulations. However, RMSF values cannot be correlated with binding energies, but they can be correlated to the charge distribution throughout the molecule, which has, as a consequence, different types and numbers of interactions with HSA amino acid residues. For example, Br is a substituent with a positive resonant and negative inductive effect, and because the negative charge is fairly localized on the bromine atom, it forms several electrostatic interactions with amino acid residues ( Figure 6). This minimizes the flexibility of amino acid residues locally with lower RMSF values. The formation of strong hydrogen bonds between amino acids in the active pocket and OH, OMe, and NO 2 groups leads to a significant change in the secondary structure and fluctuations in the positions of atoms, which results in a higher average deviation of the positions.
The presented conclusions are important for biomedical applications as the defined substituent's effect allows fine-tuning of the activity through synthesizing compounds with desirable moieties. Small differences in electronic effect can lead to differences in interactions with proteins and other biomolecules and the scavenging of free radicals.
Chemicals
The investigated coumarin derivatives were obtained as previously described [41]. Solvents used for the other experiments were purchased from Merck (Merck & Co., Rahway, NJ, USA) as p.a. chemicals.
Spectroscopic Analysis
The IR spectra of investigated coumarin derivatives were recorded on a Thermo Nicolet-Avatar 370 FTIR spectrometer (Thermo Fisher, Waltham, MA, USA) in the range between 4000 and 400 cm −1 . The samples were prepared in the KBr pallet with the mass ratio of derivatives: KBr = 2 mg: 150 mg. The UV-VIS spectra were obtained in ethanol on the Thermo Scientific-Evolution 220 spectrophotometer (Thermo Fisher, Waltham, MA, USA) between 800 and 200 nm. The NMR spectra were measured on a Bruker AvanceTM 400 MHz spectrometer (Bruker, Billerica, MA, USA). Chemical shifts are shown relative to TMS.
Spectrofluorimetric Analysis of BSA Protein Binding
The affinity of coumarin derivatives towards bovine serum albumin (BSA) was investigated by the spectrofluorimetric measurements on a Cary Eclipse MY2048CG03 instrument (Agilent Technologies, Santa Clara, CA, USA). The excitation wavelength was set to 295 nm, corresponding to the excitation of tryptophan residues, and emission was followed between 310 and 500 nm. The scan rate was 600 nm min −1 , with both slits set to 5 nm. The concentration of BSA in phosphate-buffered saline was kept constant at 5 × 10 −5 M, while the concentration of coumarin derivatives changed between 0.1 and 1 × 10 −5 M. The measurements were performed at three temperatures (27,32, and 37 • C) to allow the calculation of the thermodynamic parameters that govern the binding process. The decrease in fluorescence emission intensity followed a double logarithmical Stern-Volmer quenching mechanism.
Electron Paramagnetic Resonance Spectroscopy (EPR) Analysis of Radical Scavenging Activity
The anti-HO • activity of compounds was measured by the electron paramagnetic resonance (EPR) spectroscopy on a Bruker Elexsys E540 EPR spectrometer (Bruker, Billerica, MA, USA) operating at X-band (9.51 GHz). The following parameters for the measurements were set: modulation amplitude-1G; modulation frequency-100 kHz; microwave power-10 mW, center field-3500 G. The spectra were recorded using Xepr software 3 (Bruker BioSpin, Billerica, MA , USA). The samples were drawn into 5 cm long gaspermeable Teflon tubes (Zeus Industries, Raritan, Franklin Township, NJ, USA) with a wall thickness of 0.025 mm and an internal diameter of 0.6 mm. The measurements were performed under normal conditions, using quartz capillaries into which Teflon tubes were placed. The radical was obtained in the Fenton system with the following concentrations: 5 mM H 2 O 2 , 5 mM FeSO 4 , and 100 mM spin-trap DEPMPO. The amount of radical was determined by the EPR signal after the formation of spin-adduct with DEPMPO. Due to the compound's insolubility in water, a 10 mM solution of coumarin derivatives was prepared in DMSO and diluted with water to 10 −4 M. The final concentration of coumarin derivatives in each measurement was 10 −5 M. The blank probe contained only the Fenton system with the same amount of DMSO as the other measurements. The radical scavenging activity of compounds was calculated from the peak heights as the relative decrease of the EPR signal of spin-adduct before and after the addition of compounds. The activity was calculated as the % of reduction = 100 × (I 0 − I a )/I 0 . The intensities in the previous equation denote the intensities of the second and third low-field EPR peaks of the control system and a sample containing coumarin derivatives, respectively.
Theoretical Methods
Structure optimizations were performed in the Gaussian 09 Program package Version 09 [42]. Three common functionals (B3LYP-D3BJ, APFD, and M06-2X) [43][44][45][46] in conjunction with the 6-311++G(d,p) [47] basis set were employed for the optimization of the structure. These calculations were conducted without any geometrical constraints. The absence of imaginary frequencies proved that the minima on the energy surface were obtained. The vibrational modes were visualized in the GaussView 6 program [48] and further investigated through the potential energy distribution (PED) analysis. The conductor-like polarizable continuum (CPCM) solvent model was employed to mimic the experimental conditions [49]. The electronic transitions were calculated by the time dependent-density functional theory (TD-DFT) in ethanol as the solvent. The 1 H and 13 C NMR spectra of differently substituted coumarin derivatives were obtained by the gauge independent orbital approach (GIAO) [50,51], which was implemented in the Gaussian09 Program package [42]. The calculated values of chemical shifts are presented relative to the signals of TMS, optimized at the same level of theory. The effects of substituents were analyzed in detail by the natural bond orbital (NBO) and quantum theory of atoms in molecules (QTAIM) methods. NBO [52] is used to access the energy of stabilization interactions that govern structure stability. On the other side, QTAIM is a complementary approach to investigate the intermolecular interactions based on the electron density and its Laplacian in the bond critical points (BCP) and ring critical points (RCP) [53]. This approach is based on Bader's theory of interacting atoms. These calculations were carried out in the AIMAll program package [54]. QTAIM recognizes two types of interactions: closed and open-shell interactions [53,55]. The first type includes covalent bonds with an electron density of around 0.1 a.u. and large negative Laplacian. The second type covers ionic bonds, van der Waals interactions, and hydrogen bonds. The electron density of these interactions is between 0.001 and 0.04 a.u., while Laplacian has a positive value.
Molecular Docking Analysis
Molecular docking studies were performed to investigate the binding affinity towards transport proteins and evaluate binding energies and inhibitory constants. The crystal structure of the protein used in the molecular docking study (human serum albumin (HSA)) was obtained from the RCSB Protein Data Bank with PDB ID:2BXD [56]. Water molecules, cofactors, and co-crystalized ligands were deleted, and protein was prepared for the simulation using BIOVIA Discovery Studio 4.0. [57] As previously mentioned, ligands were prepared for simulations by geometry optimization using the Gaussian09 software package. For performing molecular docking simulations, the AMDock software package with implemented AutoDock 4.2.6 was used [58]. The Kollman partial charges and polar hydrogens were added using the AutoDockTools interface. The flexibility of the ligands/complexes was considered during simulations, while the protein remained rigid. The Lamarckian genetic algorithm (LGA) was employed for protein-complex flexible docking. The following parameters were determined for the LGA method: there were a maximum of 250,000 energy evaluations, 27,000 generations, and mutation and crossover rates of 0.02 and 0.8, respectively. For the search of the active site and ligand orientation, AutoGridFR was utilized. AutoDock 4.2.6 was implemented for the molecular docking energy calculations using Amber Force Field [59][60][61]. The interactions between the target protein and the investigated compounds were analyzed and illustrated in 3D using BIOVIA Discovery Studio 4.0 and ADT.
Molecular Dynamics Analysis
Structures from the molecular docking which express the highest binding potential were subjected to molecular dynamics simulations (MD). The AMBER22 software package with implemented CHARM36m force field was used to perform MDs [62]. The Charmm-GUI server produced the investigated compounds' topologies, input parameters, and coordinate files [63]. The steepest descent and conjugate gradient algorithms were used to conduct minimization with a tolerance of up to 1000 kJ mol −1 nm −1 over 50,000 steps. The phase of equilibration was conducted under NVT ensemble settings. The MD production process was carried out in an NPT ensemble utilizing the SHAKE algorithm for a 100 ns time scale and implementing a Monte-Carlo barostat (P = ps). Additionally, from MD output trajectories, root mean square deviation (RMSD), radius of gyration (Rg), and root mean square fluctuation (RMSF) were calculated to examine system features during and after molecular dynamics simulations, including general stability and structural fluctuations. These parameters are used to assess the stability and structural changes of the proteinligand complex across the calculated timeframe [64,65].
Conclusions
The effects of electron-donating/electron-withdrawing substituents on spectral, structural, antioxidant, and protein binding activities were analyzed for four coumarin derivatives. The appropriate DFT level of theory (B3LYP-D3BJ/6-311++G(d,p)) was selected upon the comparison between experimental and theoretical bond lengths and angles of coumarin-3-carboxylic acid and 3-acetylcoumarin as structurally similar compounds. The comparison between structural parameters of four coumarin derivatives showed a resemblance between these structures, with the only difference being the bond length between a carbon atom and a substituent. The main difference in NBO stabilization interactions came from the interaction between lone pair of atoms in substituent (oxygen, nitrogen, and bromine) and surrounding carbon-carbon bonds. These values were nicely followed by QTAIM parameters. The experimental NMR chemical shifts were compared to the calculated ones. The 1H NMR chemical shifts' correlation coefficient was 0.999 in all cases, with MAE values between 0.1 and 0.13 ppm. Regarding 13 C NMR chemical shifts, high correlation coefficients and low MAE values were also obtained. The most prominent IR bands were well reproduced, especially for the substituents. UV-VIS spectra were assigned based on theoretical values, with the difference in several nanometers attributed to the explicit solvent effect that was not included in the used solvent model. The percentage of reduction in HO • also depended on the present substituent (23% (6-OMe), 16% (6-OH), 15% (6-NO 2 ), and 13% (6-Br)), marking the investigated compounds as moderate radical scavengers. All coumarin derivatives interacted spontaneously with BSA, as shown by spectrofluorometry. The binding affinity decreased at body temperature in order 6-NO 2 > 6-Br > 6-OH > 6-OMe. This order was reproduced by molecular docking simulations. All compounds bind more tightly to HSA than ibuprofen, except for 6-OMe. The binding affinity of warfarin was higher than for the investigated compounds. The effect of substituent was also shown in MD simulations on the compactness and flexibility of the protein. These results significantly added to the understanding of the electron effect of substituents on biological activities, and further studies are needed to include other typical substituents. | 12,961.2 | 2023-07-01T00:00:00.000 | [
"Chemistry"
] |
Correlation between Uniaxial Compression Test and Ultrasonic Pulse Rate in Cement with Different Pozzolanic Additions
This work aims to study the relationship between the compression resistance and velocity from ultrasonic pulses in samples of mortars with 25% of pozzolanic content. Pozzolanic cement is a low-priced sustainable material that can reduce costs and CO2 emissions that are produced in the manufacturing of cement from the calcination of calcium carbonate. Using ultrasonic pulse velocity (UPV) to estimate the compressive resistance of mortars with pozzolanic content reduces costs when evaluating the quality of structures built with this material since it is not required to perform an unconfined compression test. The objective of this study is to establish a correlation in order to estimate the compression resistance of this material from its ultrasonic pulse velocity. For this purpose, we studied a total of 16 cement samples, including those with additions of pozzolanic content with different compositions and a sample without any additions. The results obtained show the mentioned correlation, which establishes a basis for research with a higher number of samples to ascertain if it holds true at greater curing ages.
Introduction
The construction industry currently confronts significant challenges, such as the growth of increasingly populated cities, the rise in pollution levels in urban centers, and the need to guarantee housing for the population. These challenges require a transformation to allow for the use of sustainable construction materials at an optimum cost while fulfilling the established quality criteria.
Cement is the main component of mortar and concrete, which are among the most widely used materials in construction. It is estimated that cement manufacturers are responsible for 7% of annual CO 2 emissions. Using supplementary cementitious materials (SCMs) in the manufacturing of cement, such as pozzolanic materials, could significantly reduce these emissions as well as costs due to their low price. This article studies the most representative pozzolanic materials from a geological perspective of the different deposits existing in the north-eastern part of Cuba. In Cuba, there are difficulties with supply and access to building materials. Mortars with pozzolanic additions from local deposits could provide an economical and sustainable option.
Another great challenge for the industry is to analyze the quality of the structures once they have been built. Destructive testing, which is time-consuming and expensive, is traditionally used to verify that there is no damage to the infrastructure. Non-destructive testing (NDT) can evaluate the quality without altering the original attributes or damaging the material.
Non-destructive testing includes techniques based on ultrasound (UT). These techniques have a wide range of applications in the characterization and evaluation of materials [1] in different sectors, such as the aerospace industry, manufacturing industries, and civil works and infrastructures [2].
One of the applications of ultrasound testing is the evaluation of the compressive strength of mortars and concrete based on ultrasonic pulses velocity (UPV). UPV tests are an easily applied non-destructive method for calculating of the speed of propagation of ultrasonic waves. These values can be correlated with the compressive strength of the mortars and concrete as measured with direct compression tests. In general, the compressive strength and the velocity measured have a correlation, where an increase in velocity results in high values of compressive strength [3].
Many factors influence the models established to predict the compressive strength of concrete from UPV. Some of these factors include the nature of the cement, the influence of aggregate size, the water-cement ratio, and the use of active admixtures [4]. For example, Abo-Qudais [5] indicated that the UPV decreases with the increase of the water-cement ratio and increases with an increasing aggregate size. Güçlüer K. [6] found that increasing the surface roughness of the aggregates led to an increase in the compressive strength of concrete and the speed of the ultrasonic pulses.
Another factor to consider is the influence of the concrete mesostructure on wave propagation. Recent research [7] has shown that the complex heterogeneous mesostructure of a concrete slab affects the displacements associated with wave motion, leading to inaccuracies in the wave propagation model. The discrepancy between the actual and theoretical values increases with the proportion of the aggregates and the particle size of the aggregates.
Research has been carried out to determine the relationship between compressive strength and UPV in different materials derived from cement. Benaicha et al. [8] showed that ultrasonic speed can be used to study the quality of concrete reinforced with steel fibres and is an effective way of evaluating the level of consolidation during and after the curing period. Stawiski and Kania [9] analyzed the ultrasonic test methodology to determine the conversion factor for compressive strength in samples of various dimensions. Alexandre Bogas et al. [3] studied concretes with different types of aggregates, achieving a simplified expression to estimate the compressive strength from UPV that was not affected by the kind of concrete and its composition. Nash't et al. [10] investigated whether it is possible to obtain a relationship between compression resistance and ultrasonic speed in concrete samples where the design characteristics are unknown.
The relationship between the ultrasonic pulse velocity and the elastic characteristics of the material are, according to the following equation: where c is speed of propagation, E is Young's modulus, e is the material density and µ is the Poisson coefficient.
Since there is a correlation between Young's modulus of a material and its resistance to compression, there will also be a correlation between this resistance and UPV, if the rest of the magnitudes involved are assumed to be invariable [11].
Many researchers like Kheder [12], Nash't et al. [10], Hamid et al. [13] and Omer et al. [14] have shown that the relationship between compressive strength and pulse rate can be estimated by the following exponential relationship: where Fc is the compressive strength (MPa), UPV is the ultrasonic pulse rate (km/s), and A and B are empirical constants. However, few studies have been conducted on the relationship between UPV and compressive strength in cement, mortars, or pozzolanic concrete. Madandoust and Mousavi [15] designed fifteen mixtures of concrete with different metakaolin contents and water-binder ratios (W-B). The hardened properties were tested for compressive strength and ultrasonic pulse rate (UPV), among other tests, and it was determined that compressive strength can be predicted based on UPV using multiple regression analysis. Rao et al. [16] studied a roller-compacted concrete pavement (RCCP) material with different Class F fly ash content as a mineral admixture with different sands as fine aggregate, all tested at different curing ages. Through the results of UPV testing and compressive strength, he proposed the following relationship between both parameters: The use of natural additions with different chemical compositions alters the behavior of the mixtures produced compared to materials without additions, with this behavior being more significant at greater curing ages, as can be seen in the study of Ranjbar et al. [17], who studied self-compacted concrete with natural zeolite. There is no consensus or unified formula for using UPV to predict compressive strength, despite the great benefits this would provide, since many factors influence the correlation between UPV and traditional compression tests. Furthermore, when we aim to apply this prediction technique to mortars or concrete with natural additions, such as pozzolans, we find little literature on the subject.
This research aims to study the correlation between UPV and resistance to compression in mortar samples with natural pozzolanic additions to examine whether this methodology can be applied to these materials and thus initiate a line of research to determine if a unified formula can be developed to calculate compressive strength for mortars with these additions. The aim of this manuscript is not to suggest a generalized correlation applicable to other materials, but to obtain results for the most significant pozzolanic materials within the deposits shown to have the most appropriate physicomechanical properties for use in high-strength mortars.
With the implementation of this methodology, it is possible to reduce the cost of quality inspections, making them accessible to the entire population.
Materials and Methods
To carry out the research, 160 × 40 × 40 mm prismatic specimens of mortars were tested, consisting of 400 g of a 75% mixture of cement type II 42.5 R and 25% of different materials with pozzolanic characteristics, 225 g of filtered water (0.56 w/c), and 1350 g of standard CEN (European Committee for Standardization) sand. In addition, a test specimen made from the reference cement was tested (400 g of cement type II 42.5R, 1350 g of standard CEN sand, and 225 g of filtered water).
The pozzolanic specimens were composed using eleven natural materials. Four of the materials were natural mordenite from a deposit located in Almeria, Spain. The other seven were different materials of volcanic origin coming from different deposits in the north-eastern area of Cuba. Table 1 shows the list of pozzolanic samples used in this investigation. The preparation of the mortar specimens was carried out according to the European Standard [18]. The preparation of the specimens started with a grinding process using two jaw crushers and a disc mill until a Blaine specific surface area of 4000 ± 200 cm 2 /g was obtained to match the specific surface of the cement used. Once the samples were the right size, 100 g of each sample was mixed with 300 g of cement to obtain the ratio of 75-25 cement-pozzolan. The mortar was mixed with an Ibertest mixer. First, 225 g of water was poured into the receptacle and the cement-pozzolana mixture was added. While the mixer was running, sand was added regularly. When the mortar was fixed, three samples (A-B-C) of each mixture were molded and then compacted using a Suzpecar compactor. Finally, the specimens were covered and placed in the wet chamber for 24 h, after which the specimens were unmolded and quickly immersed in water at 20.0 ± 1.0 • C in the curing tanks until they were tested. A total of 48 mortar specimens were made, 45 from type II 42.5 R cement with 25% additions from 11 different pozzolanic samples (specimens A-B-C from 1 to 15) and three made from type II 42.5 R cement without additions (test tubes 16 A-B-C).
A CONTROLS-UPV E48, together with Ultraschall-Gel Sauerland as a contact gel, determined the propagation time of the ultrasonic waves, as shown in the Figure 1. Volcanic glass Guaramanao The preparation of the mortar specimens was carried out according to the European Standard [18]. The preparation of the specimens started with a grinding process using two jaw crushers and a disc mill until a Blaine specific surface area of 4000 ± 200 cm 2 /g was obtained to match the specific surface of the cement used. Once the samples were the right size, 100 g of each sample was mixed with 300 g of cement to obtain the ratio of 75-25 cement-pozzolan. The mortar was mixed with an Ibertest mixer. First, 225 g of water was poured into the receptacle and the cement-pozzolana mixture was added. While the mixer was running, sand was added regularly. When the mortar was fixed, three samples (A-B-C) of each mixture were molded and then compacted using a Suzpecar compactor. Finally, the specimens were covered and placed in the wet chamber for 24 h, after which the specimens were unmolded and quickly immersed in water at 20.0 ± 1.0 °C in the curing tanks until they were tested. A total of 48 mortar specimens were made, 45 from type II 42.5 R cement with 25% additions from 11 different pozzolanic samples (specimens A-B-C from 1 to 15) and three made from type II 42.5 R cement without additions (test tubes 16 A-B-C).
A CONTROLS-UPV E48, together with Ultraschall-Gel Sauerland as a contact gel, determined the propagation time of the ultrasonic waves, as shown in the Figure 1. The determination of the propagation time and ultrasonic pulse velocity was carried out following the European Standard [19]. UPV was calculated according to the equation: To obtain the compressive strength of the specimens, an ELE/SDE compression press was used, along with an Ibertest compression device with a uniform load increase of 2400 ± 200 N/s, during the whole application time until breakage, following the indications of the European Standard [16].
We started with the bending resistance test, as indicated in the standard, to obtain the semi-prisms that would be tested in the compression test. Each semi-prism of the specimens was placed in the test cell of the compression press, which applied an incremental force until the specimen broke.
For the analysis of the results, a statistical study was carried out. The trend curve that best fits the results was obtained from an exponential and lineal regression analysis by the method of least squares using the Excel program. Absolute and relative errors were calculated, and the upper and lower limits of reliability established based on a confidence interval. The determination of the propagation time and ultrasonic pulse velocity was carried out following the European Standard [19]. UPV was calculated according to the equation: To obtain the compressive strength of the specimens, an ELE/SDE compression press was used, along with an Ibertest compression device with a uniform load increase of 2400 ± 200 N/s, during the whole application time until breakage, following the indications of the European Standard [16].
We started with the bending resistance test, as indicated in the standard, to obtain the semi-prisms that would be tested in the compression test. Each semi-prism of the specimens was placed in the test cell of the compression press, which applied an incremental force until the specimen broke.
For the analysis of the results, a statistical study was carried out. The trend curve that best fits the results was obtained from an exponential and lineal regression analysis by the method of least squares using the Excel program. Absolute and relative errors were calculated, and the upper and lower limits of reliability established based on a confidence interval.
Results
A total of 48 prismatic mortar specimens were tested in the laboratory. The ultrasonic pulse velocity and compressive strength of each were obtained, which are shown in Table S1 (supplementary material). Table 2 shows the average value of compressive strength for each specimen, the measurement of the propagation time, and the ultrasonic pulse velocity. From the statistical analysis of exponential regression of the results, the equation to determine the trend of the data obtained, with a determination coefficient (R 2 ) of 0.8921, is as follows: Compressive strength = 0.0538 × e 1.5767×UPV Based on the expression determined in this study and on the experimental data, an upper and lower limit of reliability was calculated considering a 95% confidence level.
Upper limit = 0.1207 × e 1.4072×UPV (6) Lower limit = 0.0193 × e 1.7955×UPV (7) Figure 2 shows the results obtained, the exponential trend, and the calculated limits. The estimated compressive strength is calculated by the exponential regression for the UPV values obtained, as shown in Equation (5). In addition, the relative and absolute errors between the real and calculated values were obtained and shown in Table 3.
The 95% confidence limits can be used to preliminarily estimate the compressive strength of mortars, quickly and inexpensively, from a UPV measurement. Therefore, if we were to obtain a UPV of 4.15 km/s, we would know with 95% confidence that the compressive strength value was 33.35-41.39 MPa.
The absolute and relative errors of the results obtained by exponential regression were also calculated based on the values of the compressive strength test, ranging from 0.21 to 7.23 and from 0.51 to 17.81%, respectively. These errors represent an acceptable approximation for an estimate, for example, in a standard quality study. Sci. 2021, 11, x FOR PEER REVIEW 6 of 11 The estimated compressive strength is calculated by the exponential regression for the UPV values obtained, as shown in Equation (5). In addition, the relative and absolute errors between the real and calculated values were obtained and shown in Table 3. The 95% confidence limits can be used to preliminarily estimate the compressive strength of mortars, quickly and inexpensively, from a UPV measurement. Therefore, if However, sometimes greater accuracy is required when performing an analysis of the physical properties of mortars. Therefore, the trend in four of the study samples from the same deposit was further tested to see if the correlation remained reliable, as the chemical composition would be almost the same.
The results obtained from the natural mordenite from the San Jose-Los Escullos site were isolated following the same process mentioned before. The equation determined from the statistical analysis of exponential regression of the results, with a determination coefficient (R 2 ) of 0.9801, is as follows: Compressive strength = 0.0212 × e 1.8125×UPV (8) The upper and lower reliability limits with a 95% confidence level were: Upper limit = 0.0804 × e 1.5246×UPV (9) Lower limit = 0.003 × e 2.2424×UPV (10) The graphic representation of these results is shown in Figure 3.
the physical properties of mortars. Therefore, the trend in four of the study samples from the same deposit was further tested to see if the correlation remained reliable, as the chemical composition would be almost the same. The results obtained from the natural mordenite from the San Jose-Los Escullos site were isolated following the same process mentioned before. The equation determined from the statistical analysis of exponential regression of the results, with a determination coefficient (R 2 ) of 0.9801, is as follows: Compressive strength = 0.0212 × 1.8125× (8) The upper and lower reliability limits with a 95% confidence level were: Upper limit = 0.0804 × 1.5246× (9) Lower limit = 0.003 × 2.2424× The graphic representation of these results is shown in Figure 3. In this estimation, if we were to obtain a UPV of 4.15 km/s, we would know with 95% confidence that the compressive strength value was between 33.39-44.97 MPa. The absolute and relative errors ranged from 0.20 to 1.6 and from 0.47 to 6.98%, respectively.
For this second exponential regression, a higher R 2 was obtained, which means that the regression is better adjusted to the real data. In addition, both absolute and relative errors were lower than those obtained in the first regression. This may be because the samples were composed of natural materials from the same deposit that have a similar In this estimation, if we were to obtain a UPV of 4.15 km/s, we would know with 95% confidence that the compressive strength value was between 33.39-44.97 MPa. The absolute and relative errors ranged from 0.20 to 1.6 and from 0.47 to 6.98%, respectively.
For this second exponential regression, a higher R 2 was obtained, which means that the regression is better adjusted to the real data. In addition, both absolute and relative errors were lower than those obtained in the first regression. This may be because the samples were composed of natural materials from the same deposit that have a similar chemical composition. However, the size of the samples, which was smaller than in the previous case, can affect the trend.
The same procedure followed for the exponential regression was repeated to study the possible linear trend of the results. The equation determined from the statistical analysis of exponential regression of the results, with a determination coefficient (R 2 ) of 0.873, is as follows: Compressive strength = 53.93 × UPV − 185.55 (11) The upper and lower reliability limits with a 95% confidence level were: Upper limit = 53.939 × UPV − 181.65 (12) Lower limit = 53.939 × UPV − 189.45 (13) The graphic representation of these results is shown in Figure 4.
The upper and lower reliability limits with a 95% confidence level were: Upper limit = 53.939 × − 181.65 (12) Lower limit = 53.939 × − 189.45 (13) The graphic representation of these results is shown in Figure 4. In this estimation, if we were to obtain a UPV of 4.15 km/s, we would know with 95% confidence that the compressive strength value was between 34.27-42.32 MPa. Table 4 shows the results of the estimated compressive strength calculated by Equation (11) for the UPV values obtained. In addition, the relative and absolute errors between the real and calculated values were obtained. In this estimation, if we were to obtain a UPV of 4.15 km/s, we would know with 95% confidence that the compressive strength value was between 34.27-42.32 MPa. Table 4 shows the results of the estimated compressive strength calculated by Equation (11) for the UPV values obtained. In addition, the relative and absolute errors between the real and calculated values were obtained. The absolute and relative errors of the results obtained by lineal regression vary from 0.09 to 6.17 and from 0.34 to 15.21%, respectively. These errors represent an acceptable approximation for an estimate, for example, in a standard quality study.
The errors obtained in linear regression were inferior to those of exponential regression. However, the coefficient of determination of the linear regression was lower, which indicated a worse adjustment of the trend.
It is known that mortars made with pozzolanic cement acquire an increased resistance at greater curing ages compared to mortars without additions [20], so it is expected that at greater ages the estimate obtained with the linear regression would provide compression resistance values below the real ones.
Discussion
The article shows the results of the study of the velocity of ultrasonic waves in mortars composed of cement and pozzolanic additions. The velocity of ultrasonic waves was compared with the results of compression resistance tests from samples containing 25% pozzolanic additions.
The tests took place during the curing ages of 7 and 28 days. As was expected, both the compression resistance and the velocity of ultrasonic waves increased with the age. This shows an upward trend in the curve.
A correlation between the results of the UPV and compression tests can be observed. This correlation is studied through a statistical analysis using linear and exponential regressions through the method of least squares.
After study of the results with the linear and exponential regressions, the exponential regression was found to be more adequate, as it gives a higher coefficient of determination, meaning that it gives a more precise adjustment. This trend also matched the trend of the compression resistance of the pozzolanic mortars, which increased exponentially with the curing age.
We consider it possible to predict the compression resistance from the UPV values in mortars of pozzolanic cement since there is a strong correlation amongst the values obtained in both tests. Nevertheless, the results are not conclusive. This work establishes the basis to perform research with a higher number of samples and greater curing ages to confirm the existence of the exponential relationship between compression resistance and ultrasonic pulse velocity. Factors such as the uncertainty associated with the tests or the heterogeneity between the study samples should be included in future research to determine more precisely the formula that relates the UPV to the compressive strength of these materials. | 5,353.6 | 2021-04-21T00:00:00.000 | [
"Materials Science"
] |
What Drives Public Debt Growth? A Focus on Natural Resources, Sustainability and Development
Public debt is a notable measure of economic and financial sustainability which encountered policy and scholarly interest in the international development ambients. This paper investigates the major drivers of public debt growth in 184 countries. The underlying cross-country survey is conducted on the basis of the improved compilation of datasets on the central government debt for 2013. The study finds that oil abundance, economic growth rate, the share of mineral rent in the total revenue, interest rate payments for foreign borrowings, and being a developing country have a statistically significant impact on the growth of the public debt. In contrast, defence spending, unemployment rate, and inflation rate do not have a statistically significant positive impact on the public debt rate.
INTRODUCTION
Public debt extent is a primary measure of economic and financial sustainability. The topic has often resurged in the wake of financial and economic crises, finding new fashions (Greiner and Fincke, 2016). Public budget and public debt sustainability is no news and have been attracted international financial organisations (Spaventa, 1987). They have often been referred to as the theory of intertemporal budget constraint (Baglioni and Cherubini, 1993), as part of the intertemporal viability of the economic policy (Cisco and Gatto, 2021). Public debt is often studied and considered in cases of economic, financial, and multidimensional crises. It is often referred to as a countercyclical resilience policy instrument to mitigate a system's vulnerability (Gatto and Busato, 2020;Gatto and Drago, 2020). This issue is of relevance for crises connected with resource management and energy markets (Busato and Gatto, 2019).
International development agencies are directly involved in this process, inter alia, in prescribing tailored macroeconomic recommendations to the international community (Gatto, 2020;United Nations, 2008). The latter has the objective of addressing the national development policies to ensure sustainable development to both developing and developed countries. Besides being gauged by means of intertemporal choice, public debt and budget sustainability can also refer to solvency criteria -a controversial regulatory topic within the EU budgeting and monetary policy debate (Hartwell and Signorelli, 2015;Gatto, 2019). Empirical evidence also found that public debt is negatively associated with long-run growth (Eberhardt and Presbitero, 2015).
Sovereign borrowing as a tool of public finance emerged first in the UK after Britain's Glorious Revolution in 1688 (Pincus and Robinson, 2011). Adding to this, America's Revolution in 1776 and the European Enlightenment of the eighteenth century This Journal is licensed under a Creative Commons Attribution 4.0 International License were major events that led to a strengthening of the rule of law, sanctity of contract and parliamentary checks on the power of the heads of the states (Brautigam, 1992;Ferguson, 2014). This, in combination with the incessant money shortage of the state, led to the emergence of central banking. The money shortage and the rise of the division of powers were the results of the permanent wars taking place between European states inside Europe and outside Europe over the colonies (Kennedy, 2010).
Today, public debt is a global phenomenon practised in most countries around the world, whereby developing countries rely more on external than domestic borrowing. This is the result of the underdevelopment of the financial sector in a number of developing and transition economies.
This work aims at proposing a contribution to detecting nexuses existing amongst public debt, sustainability, energy, and military expenditure. The analyses suggest an important role of oil embedment, mineral rent, economic growth rate, and interest rate payments for foreign borrowings in developing countries in public debt increase. On the other hand, we discover that defence spending, unemployment rate, and inflation rate do not play a major role in augmenting public debt rates.
The rest of the paper is organized as follows: Section-II deals with a literature review containing studies on sources and determinants of public debt. Section-III talks about the principal hypotheses of the survey. Section-IV discusses underlying research methodology and data collection. Section-V discusses empirical results. Section-VI presents concluding remarks with policy implications.
Sources of Public Debt
The International Monetary Fund (IMF) defines debt "as all liabilities that require payments of interest and/or principal by the debtor to the creditor at a date or dates in the future. Thus, all liabilities in the Government Finance Statistics system are debt except for shares and other equity and financial derivatives" (IMF, 2001). Printing money, running down foreign exchange reserves, borrowing abroad, and borrowing domestically are four major forms of fiscal deficit financing (Fischer and Easterly, 1990). Printing money fuels inflation and the seigniorage revenue enabled by such a policy is non-linear inflation. Empirical surveys show that printing money has a very limited leeway for combating the budget deficit and at the same time is very costly for macroeconomic stability and economic growth (Easterly and Schmidt-Hebbel, 1991;Bua et al., 2014).
The literature on public debt, especially for low-income countries, focuses on external debt data (Panizza, 2008;Jaimovich and Panizza, 2010). Two factors arise: not only the data availability issue holds but also the fact that government borrowing in most developing countries was made possible mainly over foreign debt sources. The role of the local debt market to finance budget deficits started to increase in the last decade, especially in 2008, during the financial crisis (Bua et al., 2014). Running down the foreign exchange reserves has no inflationary effects. Hence, this policy seems to be more advantageous than increasing the stock of money in the economy. Nevertheless, this policy has its limits and cannot be employed for a substantially long time due to the limits of foreign exchange reserves (Krugman, 1979;Fischer and Easterly, 1990).
Despite this fact, as a short-term policy tool, this strategy could be considered as an appropriate short-term instrument for emergency and crisis situations. Foreign lending does not create an inflationary pressure on the domestic economy nor leads to crowding out of domestic lending to the private sector. This could eventually lead to the appreciation of domestic currency over the increasing demand for the local currency and harm domestic exports (Sachs and Werner, 1995;Rordrik, 2008). Foreign debt financing scales up the pressure on solvency and complicates the exchange rate management (Bua et al., 2014).
Domestic borrowing does not have inflationary pressure on the economy, nor leads to the appreciation of the local currency. The major concerns of domestic borrowing result to be the crowdingout effects of private investments by public investments and increasing domestic interest rates. Domestic borrowing is more common in countries with developed financial institutions. Thus, for a long time, domestic borrowing was latently assumed to be more widespread in the advanced and emerging economies and much less in the low-intensity conflicts (LICs). This opinion was backed by the absence of empirical data on the LICs. This paradigm has changed with the new data on domestic public debt for 36 LICs compiled by Bua et al. (2014). The dataset shows that the substantial share of public debt in these LICs was generated through domestic borrowing. This is attributable to the result of financial liberalization that commenced in the late 1980s and early 1990s (Presbitero, 2012). Based on the dataset built by Bua et al. (2014), it is appreciable as well a slight increase of the already substantial domestic borrowing as the source of public debt ( Figure 1). Domestic debt has increased from 12.3% in 1996 to 16.2% in 2011. The dataset presented in Presbitero (2012) yields the same result.
In addition, Figure 1 also shows the evolution of external debt in the LICs. There has been a steady decline in the external debt ratio over the period 1996-2008, from 72 to 23% in 2011. After 2008, this ratio did not change significantly.
It must be mentioned that domestic debt, especially in developing countries with high inflation rates, is mostly issued in foreign currencies. A textbook case is Zimbabwe during hyperinflation. During the years of hyperinflation, Zimbabwe issued the majority of debt obligations in foreign currencies. However, this is not a problem happening solely to countries experiencing hyperinflation: the overwhelming majority of the LICs issue their public obligations in the currencies which dominate in the international financial and trade relations -i.e. US Dollars, Euro, and Yuan. This is an additional burden on the sovereign default risk because the local governments are not able to control the factors determining the volatility of foreign currency (Mupunga and Le Roux, 2016). Forslund et al. (2011) identify six major categories determining the composition of public debt in developing countries. These are:
Determinants of Public Debt
(1) Macroeconomic imbalances; (2) country size and the level of development; (3) crises and external shocks; (4) openness; (5) exchange rate regime. The macroeconomic imbalances category encompasses inflation, current account balance, level of total public debt and exchange rate misalignment. The second category, country size and level of development is related to indicators such as GDP, per capita income, M2 1 over GDP, and institutional quality. The third category, crises and external shocks, captures the crisis situations related to a sovereign default and other impulsive changes in the current macroeconomic situation. The fourth category sketches trade and capital account openness. The last category, the exchange rate regime, is related to the fixed or floating exchange rates. Karagol and Sezgin (2004), Sezgin (2004), Dunne et al. (2004a, b), Narayan and Narayan (2005), Ahmed (2012), Anfofum et al. (2014), Muhanji and Ojah (2014), Azam and Feng (2015), Karagöz (2018) detect a positive causal relationship between defence expenditure as an important driver of the public debt.
Apart from external debt, military spending is tight with economic growth and investment in the long run (Shahbaz et al., 2016), whereas negative unidirectional causality emerges investigating the relationship from defence spending to economic growth (Shahbaz and Shabbir, 2012); military spending is connected with investment and trade openness, whereas it is negatively correlated with the interest rate (Tiwari and Shahbaz, 2013). It is also reputed that increases in defence spending reduce the pace of economic 1 Money supply measure, as defined by the Federal Reserve.
growth, while current economic growth is connected with the growth of previous periods, and that non-military expenditures rises can boost economic growth .
The relationship between oil abundance and public debt issues has not been yet studied exhaustively. Despite the intuition that the economies with substantial petroleum revenues should have a lower public debt share, and consequently a lower sovereign default risk (Sadik-Zada, 2016), this ascertainment is not generally valid. Hamann et al. (2016) and Arias and Restrepo-Echavarria (2016) show that this is by far not the case. Figure 2 depicts the average public debt for 25 net oil exporters between 1979 and 2010.
The cross-country average public debt to GDP ratio is 50%, ranging from 8% (UAE) to 179% (Sudan). As shown in Figure 3, only 8 of 25 countries did not have default episodes (Borensztein andPanizza, 2008, Arias andRestrepo-Echavarria, 2016). The major problem in the public finance of the oil-producing economies is the volatility of oil prices. Increasing oil prices lead to rising oil extraction and higher GDP growth rates, improvements of trade balance and current accounts, lower sovereign risk perception, and reduce default risk. In the phases of shrinking oil prices, the opposite happens, and the default risk increases substantially (Arias and Restrepo-Echavarria, 2016).
THEORETICAL FRAMEWORK AND HYPOTHESES
Fiscal policy targets do stimulate the economy especially during or before a recession. The constitutive feature of the recession is the negative growth rate at least for 6 months (Sadik-Zada, 2000 and 2016). Thus, we assume that especially in times of very low or negative growth rates the governments employ public debt as an anticyclical stimulation instrument. Based on this assumption, we test the following hypothesis: Hypothesis 1: Economic growth has a negative growth effect on public debt.
Armed with the same logic, we assume that especially in the recession phases with high pressure on the job market, governments employ public debt as a tool to compensate the recessive impulses by the positive fiscal impulses and to curb the job market.
To test for the relationship between the unemployment rate and public debt, we test the following hypothesis: Hypothesis 2: There is a positive relationship between the unemployment rate and public debt.
To combat the recession, governments increase public investments mainly financed over public debt. This is especially the case of recession phases due to decreasing tax revenues.
To assess the relationship between public debt and gross capital formation, we test the following hypothesis: Hypothesis 3: There is a positive relationship between gross capital formation (GCF) and the public debt ratio in the short run.
Increasing defence spending, especially in developing countries, does not have strong positive effects on economic growth and is not considered as an anticyclical instrument. In fact, the majority of developing countries import most armament from advanced economies. The increasing or high share of defence spending as a budget item is a sign of the existence of security risks.
In the next hypothesis, we test for the effect of defence spending on public debt.
Hypothesis 4: There is a positive relationship between defence spending and the public debt ratio. (2017) have shown that the existence of sovereign wealth funds (SWFs) in petroleum-rich countries also serve actively as an anticyclical tool. The availability of the transfers from these SWFs to the state budgets could lead to fungibility between these transfers and the public debt.
Mohaddes and Raisi
Thus, we test this in the following hypothesis: Hypothesis 5: Petroleum (mineral) abundance has a negative impact on the public debt ratio.
In order to take into account the structural differences between advanced and developing/transition economies, we include a dummy variable, which takes the value 1 for all developing and transition economies and 0 for the advanced economies. This variable captures also partly the diverging effect of the defence sector on the rest of the economy in these two groups.
Hypothesis 6: There is a difference between developing/transition and advanced economies in public debt levels.
The countries with a high level of public debt have a higher share of the interest rate as a share of public debt than the countries with moderate public debt. We also want to assess the impact of the indebtedness on the level of additional indebtedness and employ the interest rate payments as an independent variable.
Hypothesis 7: There is a positive relationship between interest rate payments and the public debt share.
Data
The data on public debt has become more comprehensive, more accurate, and more readily available in recent years due to the efforts of Abbas et al. (2011), Jaimovich andPanizza (2010), and Bova et al. (2014). Bua et al. (2014), introduced a new dataset on the stock and structure of domestic public debt in 36 Low-Income Countries over the period 1971-2011. This dataset provides not only the information on the stock of public debt and interest payments, but also encompasses the information on maturity, currency composition, creditor base, and type of financial instruments. For our analysis, we employ the data compilation provided by the last version of the World Development Indicators (2018) which incorporates the data sources mentioned above. We should stress our data collection and treatment choices. These choices were based on methodological indications provided in Gatto et al. (2021). For the sake of completeness, we take the data of 2013. This decision is driven by data availability, and to avoid data loss or imputation: we chose the most recent, standard, and representative year in terms of data, 2014, presenting 2017 a lot of missing values. The years 2013 to 2015 are more complete. Nevertheless, to avoid a structural break, we take the observations for 184 countries before the dramatic shrinkage of the oil prices in November 2014.
Methodology
For the assessment of the major determinants of public debt, this study applies a cross-country linear regression approach with data for 184 countries. To interpret the regression coefficients as elasticities, i.e., in percentages and to normalise the data, the natural logarithm of the dependent and all the independent variables are taken. To test for the existence of heteroscedasticity Breush-Pagan test was applied. The test result indicates the absence of heteroscedasticity in the dataset (Appendix 1). To assess the differences in the level of public debt between the advanced and developing economies, we employ a dummy-variable strategy. We classify all the EU-member states and all the high-income countries with a per capita income over 30000 in constant 2010 US Dollars as developed countries. Except for the UAE and Qatar, all the Gulf States are classified as developing countries.
The natural logarithm (ln) of the share of the central government debt in GDP (lngY) is the dependent variable; ln of the inflation rate (lnINFLAT), ln of the unemployment rate projected by the International Labour Organization (ILO), ln of the unemployment rate (lnUEMP), ln of the share of the oil rents as a share of GDP (lnOilRent), ln of the share of the defence spending as a share of GDP (lnDEFENCE), gross capital formation as a share of GDP (lnINV), ln of the mineral rent as a share of GDP (lnMINERAL) and ln of the interest payment for the public debt (lnINTEREST) are the independent variables.
RESULTS
In the framework of the regression analysis, seven regression equations were conducted. The first estimation is a bivariate regression with only GDP growth (lngY) as the explanatory variable. Based on the regression output, a 1% increase in economic growth leads to a -3.32% decrease in public debt. In all the 7 estimations lngY has a statistically negative impact on the public debt. The coefficient of lngY, β 1 , varies between -2.85% and -6.34%. This indicates the negative nexus between the GDP growth and the level of public debt and corroborates Hypothesis 1 (Economic growth has a negative growth effect on public debt). Figure 4 and the fitted linear regression line (fitted values) also indicate a negative relationship between the growth rate of GDP and the public debt ratio.
Inflation rate (lnINFLAT), unemployment rate (lnUEMP), and defence spending (lnDEFENCE) have no statistically significant impact on the public debt. This result rejects Hypothesis 2 and shows that there is no statistically significant relationship between unemployment (inflation) and the level of public debt. The share of oil rent (lnOILRent) and mineral rent as a share of GDP (lnMINERAL) has a statistically significant negative impact on the dependent variable (equations (4) and (5) for oil and equation (6) for mineral rent).
In Equation (6) we included gross capital formation as a share of GDP (lnINV) as a control variable to test Hypothesis 3. Estimation output rejects this hypothesis and shows that there is no statistically significant relationship between gross capital formation, which is a proxy for total investment share in GDP), and public debt.
The coefficient of lnOilRent varies between (-0.177) and (-0.196). This implies that an increase of the oil revenues by 1% leads to a decrease of the public debt by 1.77 (1,96%) (Equations (4) and (5)). Figure 5 also indicates the negative relationship between oil rent as a share in total public revenue and public debt.
lnMINERAL, another proxy for the natural resource abundance, also has a statistically significant negative impact on the level of public debt: 1% increase of the mineral rent as a share of GDP leads to a 0.05-0.06% decrease of public debt. We can observe that oil abundance has a much stronger impact on public debt than mineral rent. These results corroborate Hypothesis 4. This implies a positive relationship between resource abundance and fiscal stability. Interest payments (public debt-related) as a share of total revenue have a statistically significant positive impact on the level of public debt: An increase of the interest payments by 1% lead to an increase of the public debt by 0,593%.
In order to control for the difference between developing and developed countries, we add a dummy variable, DEVELOPING, Source: Authors' illustration which take the value 1 if the country in the dataset is a developing or transition economy, and 0 if the country is a developed country with a high-income level or an EU-member country. We find that being a developing country has a statistically significant negative impact on public debt. Being a developing country leads on average to a 6,5% decrease of public debt as a share of GDP.
As shown in the estimation output sketched in Table 1, the coefficients of determination in the estimations range between 16.3 and 75.5%. This implies that all the regression models explain a substantial share (at least 16,3% and at utmost 75,5%) of the variations of the dependent variable, i.e. lnDebt.
CONCLUDING REMARKS
In this study, we have analysed public debt dynamics. Indeed, the public budget is a major economic and financial sustainability driver and an international development policy issue. We have also explored public debt connections with natural resources and energy, defence expenditure, unemployment rate and the country's stage of development and sustainability issues.
A cross-country regression survey shows that a greater growth rate of the aggregate GDP has a statistically negative impact on public debt as a share of GDP. This effect vanishes if we include the developing country dummy in Equation (8). Unemployment has a statistically significant impact on the level of public debt only in the last regression Equation (8). Interest payments also have a statistically significant positive impact on the level of public debt (Equations (7) and (8)). Oil rent as a share of total revenue (Equations (4) and (5)) has a statistically significant negative impact on public debt. The same applies to the mineral rent as a share of total revenue (Equations (6) and (7)). Defence spending does not have a statistically significant impact on the level of public debt.
Future studies might take into account further research questions arising from this work. Upcoming analyses may want to examine more closely endogeneity and eventual multicollinearity issues that have found no space in this study. These problems might be solved by corroborating the estimation results by making use of diverse techniques and tests. For this purpose, further elaboration of the econometric strategy would benefit the validity of the analyses undertaken. Authors' own regression estimations. Robust standard errors in parentheses. ***P<0.01, **P<0.05, *P<0.1 | 5,166 | 2021-08-08T00:00:00.000 | [
"Economics",
"Environmental Science"
] |
Lagrangian solutions to the Vlasov-Poisson system with a point charge
We consider the Cauchy problem for the repulsive Vlasov-Poisson system in the three dimensional space, where the initial datum is the sum of a diffuse density, assumed to be bounded and integrable, and a point charge. Under some decay assumptions for the diffuse density close to the point charge, under bounds on the total energy, and assuming that the initial total diffuse charge is strictly less than one, we prove existence of global Lagrangian solutions. Our result extends the Eulerian theory of [16], proving that solutions are transported by the flow trajectories. The proof is based on the ODE theory developed in [8] in the setting of vector fields with anisotropic regularity, where some components of the gradient of the vector field is a singular integral of a measure.
Introduction and main results
We study the Cauchy problem associated with the Vlasov-Poisson system in the three dimensional space, where f : R + × R 3 × R 3 → R + stands for the non-negative density of particles in a plasma under the effect of a self-induced field E, while ρ : R + × R 3 → R + is the spatial density and γ ∈ {−1, 1} is a parameter which models the repulsive (γ = 1) or attractive (γ = −1) nature of the particles. We recall that the self-induced field E(t, x) is a conservative force. Therefore there exists a function U : R + × R 3 → R such that E(t, x) = ∇ x U (t, x), thus the Poisson equation −∆U = ρ is fulfilled. In other words, we can rewrite the system (1.1) as a Vlasov equation coupled with a Poisson equation, from which the name Vlasov-Poisson arises. From a physical viewpoint, the repulsive case represents the evolution of charged particles in presence of their self-consistent electric field and it is used in plasma physics or in semi-conductor devices. The attractive case describes the motion of galaxy clusters under the gravitational field with many applications in astrophysics. In this paper we focus on the repulsive case, by fixing γ = 1 in (1.1). In the last decades the Vlasov-Poisson system (1.1) has been largely investigated. Existence of classical solutions under regularity assumptions on the initial data goes back to Iordanski [21] in dimension one and to Okabe and Ukai [26] in dimension two. The three dimensional case has been addressed first by Bardos and Degond [6] for small initial data, and then extended to a more general class of initial plasma densities by Pfaffelmoser [28] and by Lions and Perthame [22]. Improvements in three dimensions have been obtained in [30,32,12,23,13]. Global existence of weak solutions has been studied by Arsenev [5] for bounded initial data with finite kinetic energy, while the global existence of renormalized solutions is due to Di Perna and Lions [17], assuming finite total energy and f 0 ∈ L log L(R 3 ×R 3 ). The latter assumption has been recently relaxed to f 0 ∈ L 1 (R 3 × R 3 ) in [3] and [7].
One might wonder what happens when f 0 / ∈ L 1 (R 3 × R 3 ). In this paper we shall address this question by assuming f 0 to be the sum of an integrable bounded plasma density and a Dirac mass. This is equivalent to studying the Cauchy problem associated with the following system: x−y |x−y| 3 ρ(t, y) dy , where the singular electric field F := F (t, x) is induced by a point charge located at a point ξ(t), whose evolution is given by the Newton equations: η(t) = E(t, ξ(t)) .
The model (1.2)-(1.3) has been recently introduced by Caprino and Marchioro in [10], where they have shown global existence and uniqueness of classical solutions in two dimensions. This result has been extended to the three dimensional case in [24] by Marchioro, Miot and Pulvirenti. Both [10] and [24] require that the initial plasma density does not overlap the point charge. This assumption has been relaxed in [16], where weak solutions of the system (1.2)-(1.3) have been obtained for initial data which may overlap the point charge, but do have to decay close to it. The price to pay is that the solution is no longer known to be unique and Lagrangian. In the following we will call Lagrangian solution a plasma density f and a trajectory (ξ, η) of the Dirac mass, both defined for t ∈ R + , such that f is transported by the Lagrangian flow (X, V ), solution to the ODE-system This is a finer physical structural information on the solution than the mere fact that f and (ξ, η) are weak solutions of (1.2)-(1.3).
In the framework of classical solutions, the Eulerian description and the Lagrangian evolution of particles given by the system of characteristics are completely equivalent. When dealing with weak or renormalized solutions, the correspondence between the Eulerian and Lagrangian formulations is non trivial and requires a careful analysis of the Lagrangian structure of transport equations with non-smooth vector fields. Indeed, without any regularity assumptions, it is not even clear whether the flow associated with the vector field generated by a weak solution exists.
In recent years the theory of transport and continuity equations with non-smooth vector fields has witnessed a massive amount of progress, also due to the large number of applications to nonlinear PDEs. In the seminal paper by DiPerna and Lions [17] the theory has been first developed in the context of Sobolev vector fields, with suitable bounds on space divergence and under suitable growth assumptions. This has been extended by Ambrosio [1] to the setting of vector field with bounded variation (BV ), roughly speaking allowing for discontinuities along codimension-one hypersurfaces. See also [4] for an up-to-date survey of this theory and its recent advances.
In the context of the Vlasov-Poisson system with a Dirac mass considered in this paper ((1.2)-(1.3)) the system of characteristics is given by (1.4). The singular electric field F generated by the Dirac mass is not regular, and it does not even belong to any Sobolev space of order one or to the BV space. Therefore the theory of [17,1] cannot be directly applied to this case. However, a related theory of Lagrangian flows for non-smooth vector fields has been initiated in [15]. In a nutshell, the approach in [15] provides a suitable extension of Grönwall-like estimates to the context of Sobolev vector fields, by introducing a suitable functional measuring a logarithmic distance between Lagrangian flows. In addition, the theory in [15] has a quantitative character, providing explicit rates in the stability and compactness estimates, and it has been pushed even to situations out of the Sobolev or BV contexts of [17,1]. In particular, using more sophisticate harmonic analysis tools, the case when the derivative of the vector field is a singular integral of an L 1 function has been considered in [14]. This has been further developed in [8], allowing for singular integrals of a measure, under a suitable condition on splitting of the space in two groups of variables, modelled on the situation for the Vlasov-Poisson characteristics (1.4). This theory has been applied to the study of the Euler equation with L 1 vorticity [9] and of the Vlasov-Poisson equation with L 1 density [7]. The latter has also been studied in [3], using the theory of maximal Lagrangian flows developed in [2].
The purpose of this paper is to recover the relation between the Eulerian and the Lagrangian picture for solutions provided in [16] by exploiting the transport structure of the equation. In other words we aim to prove existence of Lagrangian solutions to the Vlasov-Poisson system (1.1) with γ = 1 and initial data f 0 + δ ξ 0 ⊗ δ η 0 , where f 0 satisfies the assumptions of [16].
Our main result is the following , such that the initial total charge and the total energy is finite. Assume that there exists m 0 > 6 such that for all m < m 0 the energy moments ) and ξ ∈ C 2 (R + ).
2. We observe that the hypothesis (1.5) is needed only to get a control on the electric field generated by the point charge (see Proposition 3.6). This means that the charge of the plasma has to be smaller than the charge associated with the Dirac mass. From the viewpoint of physics, this is a purely technical and too restrictive condition. In a forthcoming paper, we plan to remove this constraint.
3. When considering the Cauchy problem associated with (1.1) with γ = −1 (attractive case) and initial data f 0 + δ ξ 0 ⊗ δ η 0 , the whole strategy fails. This is due to a crucial change of sign in the total energy H and in H m . More precisely, the last two terms in (1.6) and the last term in (1.7), representing respectively the potential energy of the system and the potential energy per particle, come with a negative sign. This prevents to establish a control on the trajectory of the point charge as in Proposition 3.3 and to prove Proposition 3.7.
The simpler case of a system in which the particles in the plasma are interacting through a repulsive potential while the point charge generates an attractive force field has been treated in [11] in dimension two. Notice that, even in this case, the existence of solutions in three dimensions remains an interesting open problem.
4. Theorem 1.1 does not imply uniqueness of the Lagrangian solution. In analogy to [28], where uniqueness of compactly supported classical solutions of (1.1) has been proved, uniqueness of solutions to (1.2)-(1.3) which do not overlap with the point charge and have compact support in phase space has been established in [24]. In the context of weak solutions to (1.1), sufficient conditions for uniqueness have been proved in [22] and later extended to weak measure-valued solutions with bounded spatial density by Loeper [23]. Recently Miot [25] generalised the latter condition to a class of solutions whose L p norms of spatial density grow at most linearly w.r.t. p, then extended to spatial densities belonging to some Orlicz space in [20]. Unfortunately, it seems that none of these conditions apply to our setting and new ideas are needed.
Let us informally describe the main steps of our proof. We rely on the result in [24], which guarantees existence of a (unique) Lagrangian solution to the Cauchy problem for the Vlasov-Poisson system (1.2)-(1.3), provided that at initial time the plasma density has a positive distance from the Dirac mass and bounded support in the phase space. We therefore approximate the plasma density f 0 at initial time by a sequence f n 0 obtained by cutting off f 0 close to the Dirac mass in the space variable and out of a compact set in phase space. We use [24] to construct a Lagrangian flow (X n , V n ) and a trajectory for the Dirac mass (ξ n , η n ) corresponding to the initial data f n 0 and (ξ 0 , η 0 ). The assumptions of Theorem 1.1 together with the propagation of the moments H from [16] entail some additional integrability of the densities ρ n , which in turn implies uniform Hölder estimates on the electric fields E n . Moreover, assumption (1.5) allows to prove some uniform decay of the superlevels of the Lagrangian flows (X n , V n ), which combined with an extension of the Lagrangian theory developed in [8] gives compactness of the Lagrangian flows (X n , V n ). Finally, standard energy estimates guarantee the uniform continuity of the trajectories ξ n uniformly in n. All this enables us to pass to the limit in the Lagrangian formulation of the problem, eventually giving a Lagrangian solution corresponding to the initial plasma density f 0 .
One of the main technical difficulties of our analysis is the control on large velocities. In this work, this reflects in the necessity of some control on the superlevels of the Lagrangian flows. This was already an issue in [7] and here the situation is made even more complicated by the presence of the singular field generated by the point charge. We tackle this problem by weighting superlevels with the measure given by the initial distribution of charges f 0 (x, v) dx dv (see Lemma 4.1). In this way the control on the superlevels can be proven exploiting virial type estimates on the time integral of the electric field generated by the diffuse charge and evaluated in the point charge (see Proposition 3.6). This carries the physical meaning that it is only relevant to control the flow starting from points in the support of the initial density of charge.
In connection to the theory of [8], this weighted estimates manifests in the presence of the density h = f 0 in the functional (2.12) measuring the compactness of the flows. Moreover, in contrast to [7], which was based on the isotropic analysis of [14], here we strongly rely on the anisotropic theory of [8] in which some components of the gradient of the velocity field are allowed to be singular integrals of measures, accounting for the presence of the point charge.
The plan of the paper is the following: in Section 2 we present and prove the key theorem on Lagrangian flows; in Section 3 we recall some useful properties related to solutions of the Vlasov-Poisson system; in Section 4 we give the proof of Theorem 1.1, which follows from compactness arguments by using the results established in Section 2 and 3.
Lagrangian flows
Consider a smooth solution u to a transport equation in with initial data Z(t, t, z) = z. Thus the solution can be expressed as u(t, z) = u 0 (Z(0, t, z)).
For simplicity from now on we will consider the initial time t in (2.1) fixed and denote the flow Z(s, t, z) by Z(s, z).
In this paper we deal with flows of non-smooth vector fields. In order to extend the usual notion of characteristics to our case, we extend the definition of regular Lagrangian flows in a renormalized sense by introducing a reference measure with bounded density. This turns out to be convenient in the estimates involving the superlevels of the flow (see Lemma 4.1). is a µ-regular Lagrangian flow in the renormalized sense starting at time t relative to b if we have the following: (1) The equation (3) There exists a L ≥ 0, called compressibility constant, such that, for every s ∈ [t, T ], We have denoted with L 0 loc the space of measurable functions endowed with the local convergence in measure, by log log L loc the space of measurable functions u such that log(1 + log(1 + |u| 2 )) is locally integrable, and by B the space of bounded functions. When the reference measure µ is not explicitly specified, the spaces under consideration are endowed with the Lebesgue measure.
Remark 1. Our definition of µ-regular Lagrangian flow slightly differs from the one in [8].
On the one hand we change the reference measure from the Lebesgue measure to µ. On the other hand we consider a different class of β's, which grow slower at infinity. (2.4)
Setting and result of [8]
We summarize here the regularity setting and the stability estimate of [8]. We say that a vector field b satisfies (R1) if b can be decomposed as . Notice that this hypothesis leads to an estimate for the decay of the superlevels of a regular Lagrangian flow. In fact Lemma 3.2 of [8] tells us that, if b satisfies (R1) and Z is a regular Lagrangian flow associated with b starting at time t, with compressibility constant L, then L d (B r \ G λ ) ≤ g(r, λ) for any r, λ > 0, where g depends only on L, b 1 L 1 ((0,T );L 1 (R d )) and b 2 L 1 ((0,T );L ∞ (R d )) and satisfies g(r, λ) ↓ 0 for r fixed and λ ↑ ∞.
(R2) We want to consider a vector field b(t, z) such that its regularity changes with respect to different directions of the variable z ∈ R d , that is we consider R d = R n 1 × R n 2 and z = (z 1 , z 2 ) with z 1 ∈ R n 1 and z 2 ∈ R n 2 . We denote with D 1 the derivative with respect to z 1 and D 2 the derivative with respect to z 2 . Accordingly we denote b = (b 1 , b 2 )(s, z) ∈ R n 1 ×R n 2 and Z = (Z 1 , Z 2 )(s, z) ∈ R n 1 × R n 2 . Therefore we assume that the elements of the matrix Db, denoted as (Db) i j , are in the form where -S i jk are singular integral operators associated with singular kernels of fundamental type in R n 1 (see [31]), We have denoted by We recall the main theorem from [8].
Theorem 2.3. Let b andb be two vector fields satisfying assumption (R1), where b satisfies also (R2), (R3). Fix t ∈ [0, T ] and let Z andZ be regular Lagrangian flows starting at time t associated with b andb respectively, with compressibility constants L andL. Then the following holds. For every γ, r, η > 0 there exist λ, C γ,r,η > 0 such that The constants λ and C γ,r,η also depend on: • The equi-integrability in L 1 ((0, T ); L 1 (R n 1 )) of all the m i jk which belong to this set, as well as the norm in L 1 ((0, T ); M(R n 1 )) of the remaining m i jk (where these functions are associated with b as in (R2)), • The norms of the singular integrals operators S i jk , as well as the norms of γ i jk in L ∞ ((0, T ); L q (R n 2 )) (associated with b as in (R2)), • The compressibility constants L andL.
Flow estimate in the new setting
We are going now to state a variant of this theorem, where (R1) and (R2) are replaced by (R1a) and (R2a) below. The dimension d will be here equal to 2N , instead of n 1 + n 2 , and the variable z will be in the form We consider the following assumptions, that are adapted to our setting of the Vlasov-Poisson system with a point charge: (R1a) For all µ-regular Lagrangian flow Z : [t, T ] × R 2N → R 2N relative to b starting at time t with compression constant L, and for all r, λ > 0, where G λ denotes the sublevel of the flow Z defined in (2.2).
and where b 2 is such that for every j = 1, . . . , N , where S jk are singular integrals of fundamental type on R N and m jk ∈ L 1 ((0, T ); M(R N )).
Proof. The proof follows the same line as in Theorem 2.3 (see [8]), with some modifications due to the different hypotheses. Given δ 1 , δ 2 > 0, let A be the constant 2N × 2N matrix We consider the following functional depending on the two parameters δ 1 and δ 2 , with δ 1 ≤ δ 2 : In order to improve the readability of the following estimates, we will use the notation " " to denote an estimate up to a constant only depending on absolute constants and on the bounds assumed in Theorem 2.4, and the notation " λ " to mean that the constant could also depend on the truncation parameter for the superlevels of the flow λ. The norm of the measure m however will be written explicitly.
Step 1: Differentiating Φ δ 1 ,δ 2 . Differentiating with respect to time and taking out of the integral the L ∞ norm of h, we get Then we set Z(s, x, v) = Z andZ(s, x, v) =Z and we estimate After a change of variable along the flowZ in the first integral, and noting that δ 1 ≤ δ 2 , we further obtain Step 2: Splitting the quotient. Using the special form of b from (R2a) and the action of the matrix A −1 , we have Step 3: Definition of the function U. Using assumption (R2a), we can now use the estimate of [14] on the difference quotient of b 2 , where U for fixed s is given by with M j a certain smooth maximal operator on R N x .
Step 4: We can estimate the L p (Ω) norm of Ψ by considering the first element of the minimum and changing variables along the flows: (2.14) Considering now the second element of the minimum and eq.n (2.13), we can also bound the M 1 (Ω) pseudo-norm of Ψ (where M 1 is the Lorentz space): From Theorem 2.10 in [8], we know Step 5: Interpolation. We have now the ingredients to apply the Interpolation Lemma 2.2 in [14], which allows to bound the norm in L 1 (Ω) of Ψ using Ψ L p (Ω) and |||Ψ||| M 1 (Ω) as follows: .
Uniqueness, stability and compactness
In this subsection we use the result obtained in Theorem 2.4 to show uniqueness, stability, and compactness of the regular Lagrangian flow. • The measure of the superlevels associated with Z n in hypothesis (R1a) is bounded by some functions g n (r, λ) which go to zero uniformly in n as λ → ∞ at fixed r, • The sequence {L n } is equi-bounded.
Then the sequence {Z n } converges to Z locally in measure with respect to µ in R 2N , uniformly in s and t.
Proof. We setb = b n andZ = Z n in Theorem 2.4, then there exist two positive constants λ and C γ,r,η , which are independent of n, such that for all s ∈ [0, T ] it holds In particular, for any r, γ > 0 and any η > 0, we can choosen large enough so that µ(B r ∩ {|Z(s, ·) − Z n (s, ·)| > γ}) ≤ 2η for all n ≥n and s ∈ [t, T ], which is the thesis. • The measure of the superlevels associated with Z n in hypothesis (R1a) is bounded by some functions g n (r, λ) which go to zero uniformly in n as λ → ∞ at fixed r, • For any compact subset K of R 2N , is equi-bounded in n and s, t, • For some p > 1 the norms b n L p ((0,T )×Br ) are equi-bounded for any fixed r > 0, • The norms of the singular integral operators associated with the vector fields b n (as well as their number m) are equi-bounded, • The norms of m n jk in L 1 ((0, T ); M(R N )) are equi-bounded in n. Then as n → ∞ the sequence {Z n } converges to some Z locally in measure with respect to µ, uniformly with respect to s and t, and Z is a regular Lagrangian flow starting at time t associated with b.
Proof. We apply Theorem 2.4 with b = b n andb = b m . Observe that the compressibility constants L andL of the same theorem are equal to 1. Indeed b andb are divergence free as they both satisfy assumption (R2a). Hence we have for any r, γ > 0 µ(B r ∩ {|Z n (s, ·) − Z m (s, ·)| > γ}) → 0 as m, n → ∞, uniformly in s, t.
Thus it follows that Z n converges to some Z ∈ C([t, T ]; L 0 loc (R 2N , dµ)) locally in measure with respect to µ, uniformly in s, t. The uniformity in n and s, t of the bound (2.23) implies Z ∈ B([t, T ]; log log L loc (R 2N , dµ)). We notice that conditions (2) and (3) in Definition 2.1 are satisfied, since thanks to (R2a) the vector fields b n are divergence free. We are left with the proof of condition (1). Observe that a β ∈ C 1 (R 2N ) can be approximated by a sequence of β ǫ ∈ C 1 c (R 2N ), therefore it suffices to show condition (1) for this latter class of functions. To this end we want to perform the limit in n of equation (2.2) written for Z n and b n . From the convergence in measure of Z n to Z and the fact that Z n and Z lie in a fixed ball B r (the support of β ǫ ) it follows the convergence in distributional sense of β ǫ (Z n ) to β ǫ (Z) and of β ′ ǫ (Z n ) to β ′ ǫ (Z). While using the uniform bound of b n L p ((0,T )×Br ) and Lusin's Theorem, we get convergence in L 1 loc of b n (Z n ) to b(Z). Thus we have convergence in the sense of distribution to equation (2.2).
The above compactness statement does not directly translate into an existence result for Lagrangian flows, since in general it is not trivial to find a sequence b n approximating b as in the hypotheses of Corollary 2.7. This is due to the fact that the function g(r, λ) in Lemma 4.1 does not depend only on bounds on the vector field, but also on bounds on the density of charge. We are able to do this in the specific case of the flow associated with the Vlasov-Poisson equation (solution to (1.4)) and therefore we postpone this to Section 4.
Useful estimates
In this Section we recall some well known a priori estimates on physical quantities related to the Vlasov-Poisson equation and we adapt them to the context of the system (1.2)-(1.3).
where C is a constant depending only on s.
Proposition 3.2 (Mass and energy conservation). Let
be respectively the total mass and the total energy associated with the system (1. Proof. It follows from direct inspection by performing the time derivative of M (t) and H(t).
As a consequence of Proposition 3.2, we observe that if the energy H(t) is assumed to be initially finite, then it is bounded for all times. This ensures in particular that the velocity of the Dirac mass located at ξ(t) is finite. Proof. We observe that H(t) is a sum of positive terms. Notice that here we are heavily using the electrostatic nature of the particles in the plasma. In the gravitational case, the total energy has a nonpositive term. By Proposition 3.
Then there exists a constant C > 0, which only depends on m, such that . (3.6) Proof. By definition of ρ we have . (3.7) Fix R > 0 and split the integral in the v variable into two pieces: By optimising in R in the last line of the above inequality, we get . (3.8) We plug (3.8) in (3.7) and we obtain (3.9) (1.2). Assume the total energy to be initially finite, then ρ(t, ·) ∈ L 1 ∩ L 5/3 (R 3 ) and E(t, ·) ∈ L q (R 3 ), for any Proof. The bound ρ(t, ·) ∈ L 5/3 (R 3 ) follows by Proposition 3.4 for m = 2. The estimate on the electric field is a consequence of Proposition 3.1 for s = 1 and s = 5 3 .
The following two propositions regard specifically the case in which we deal with a Dirac mass and their proof relies on the condition that the total charge M (0) has to be strictly less than one. This is the only reason why we need to assume (1.5) in Theorem 1.1. (3.10) Proof. For s ∈ [0, T ], consider (X(s, x, v), V (s, x, v)) solution to the characteristic system (1.4) with initial data (x, v). We now use the shorter notation (X(s), V (s)) and compute Then we obtain (3.11) By integrating the above expression w.r.t. time and the measure f 0 (x, v) dx dv, we get (3.12) The first term in the r.h.s. of (3.12) can be bounded as follows where we used Hölder inequality and the conservation of mass and energy in the latter estimate. The second term in (3.12) is bounded by means of Hölder inequality and Proposition 3.5: (3.14) We use (3.13) and (3.14) in the r.h.s. of (3.12) and we obtain that concludes the proof since M (0) < 1.
(ii) There exists m 0 > 6 such that for all m < m 0 Then there exists a global weak solution (f, ξ) to the system (1. Moreover, for all t ∈ R + and for all m < min(m 0 , 7), where C and c only depend on the initial data. Let f 0 and (ξ 0 , η 0 ) be the initial data of system (1.2), satisfying the hypotheses of Theorem 1.1. We consider the approximating initial densities given by Thanks to [24], this choice ensures existence and uniqueness of f n and (ξ n , η n ), solutions to the Vlasov-Poisson system (1.2)-(1.3). Moreover f n is a Lagrangian solution, i.e.
(4.4)
From now on the abstract measure µ of Section 2 will be set as µ = f 0 L 2N , where f 0 is the initial density of our problem. In order to apply Corollary 2.7, we need then the approximating vector fields b n (s, x, v) = (v, E n (s, x) + F n (s, x)) to satisfy hypotheses (R1a), (R2a), and (R3) "uniformly" in n (with equi-bounds on the quantities involved) and the bound (2.23). Furthermore we set the dimension N equal to 3.
Proof of (R1a) + equibound: control of superlevels In [8] a control on the superlevels was obtained using hypothesis (R1) which provided an upper bound on the integral of log(1 + |Z|). Without assumption (R1), we need estimates on |V | 2 in order to control the superlevels. This requires integrating a function which grows slower than log(1 + |V |) at infinity. Furthermore, differently from [7], we will bound the superlevels of Z with respect to the measure µ = f 0 L 6 . For the sake of clarity we will use the notation f 0 (B) to indicate the measure µ of a set B ⊆ R 6 . The result is the following lemma, whose proof is postponed to Subsection 4.2.
be the µ-regular Lagrangian flow relative to b starting at time t, with sublevel G λ . Assume M (0) < 1. Then, for all r, λ > 0, we have , H(0), and g(r, λ) ↓ 0 for r fixed and λ ↑ ∞. Notice that this lemma holds also for the regularized problem (system (1.2)-(1.3) with initial density f n 0 ). Therefore we have, for all r, λ > 0, where g n converges to zero for r fixed and λ ↑ ∞. Moreover, this convergence is uniform in n. Indeed the proof of Lemma 4.1 entails the functions g n to be increasing with respect to the norms of E n , U n , F n , f n , and with respect to H n (0). These quantities are in turn all bounded by the same quantities without the index n. Therefore, due to the choice of the initial densities of the regularized problem, we have where g n (r, λ) depends on the norms of E, U , F , f and on H(0), and tends to zero as λ → ∞ uniformly in n. Moreover the last two terms tend to zero as n → ∞ by Lebesgue's Dominate Convergence Theorem. Hence we have, for any fixed ǫ, r > 0, that there exist λ > 0 and N ∈ N such that f 0 (B r \G n λ ) ≤ ǫ (4.7) for each n ≥ N .
Proof of (R2a): spatial regularity x), we observe that the Lipschitz constants of b n 1 and b 1 are trivially equi-bounded. We are left to show that the derivatives of b n 2 and b 2 are singular integrals of fundamental type on R 3 of finite measures, and that the norms of the kernels associated with the singular integral operators and those of the measures in L 1 ((0, T ); M(R 3 )) are equi-bounded. We compute, outside of the origin, Therefore ∂ x j (b 2 ) i is a singular integral of the finite measure ρ + δ ξ(t) , with kernel The kernel satisfies conditions of Def.2.13 in [14], therefore it is a singular kernel of fundamental type. Similarly we have ∂ x j (b n 2 ) i = K ij (·) * (ρ(t, ·) + δ ξn(t) ), hence also ∂ x j (b n 2 ) i are singular integrals of finite measures, with equi-bounded kernels and equi-bounds on the measures' norms.
Proof of (R3)
We shall prove now that the L p -norms of b and b n in (0, T ) × B r are equi-bounded, for some p > 1 and for any fixed r > 0. Through an easy computation we notice that the M 3/2 -pseudo-norms of F and F n are equi-bounded and uniform in t: Similarly we have that the L 1 -norms of F and F n are equi-bounded in (0, T ) × B r for any r > 0: sup Furthermore Propositions 3.1 tells us that E and E n belong to L ∞ ((0, T ); M 3/2 (R 3 )), with the respective pseudo-norms which are equi-bounded in n. Therefore the second component of the vector fields b and b n (i.e. E + F , E n + F n ) are equi-bounded in the space Since v ∈ L p loc ((0, T ) × R 3 ) for any p, we conclude that b, b n belong to L p loc ((0, T ) × R 3 ) for any 1 ≤ p < 3 2 , with uniform bound on the norms.
Proof of the equi-boundedness of (2.23) We observe that |Z n | ≤ |X n | + |V n | ≤ |x| + (1 + T )|V n | . Thus it suffices to prove the equi-boundedness of (4.16) for the regularised flow V n . This is a byproduct of the proof of Lemma 4.1, where we show that the constant A depends on quantities which are uniformly bounded in n.
Conclusion of the proof of Theorem 1.1: existence of Lagrangian solutions to the Vlasov-Poisson system
Let f 0 be as in Theorem 1.1. In order to prove existence of a Lagrangian solution to system (1.2)-(1.3), we use a compactness argument. For each n, we consider the initial datum f n 0 defined in (4.1), which converges to f 0 . The result in [24] ensures existence and uniqueness of the classical Lagrangian solution f n , (ξ n , η n ) to the Vlasov-Poisson system with point charge where (ξ n (t), η n (t)) evolves according to ξ n (t) = η n (t) , η n (t) = E n (t, ξ n (t)) , (ξ n (0), η n (0)) = (ξ 0 , η 0 ) .
(4.10)
Therefore, there exists a unique flow Z n = (X n , V n ) : . From Subsection 4.1, there exists Z such that Z n → Z in measure, with respect to µ = f 0 L 6 . Therefore we define a density f which is the push forward of the initial data f 0 through the limiting flow Z, i.e.
The aim of this subsection is to verify that the above defined f is indeed a solution to (1.2)-(1.3). In other words, we want to perform the limit n → ∞ in (4.9)-(4.10) and get (1.2)-(1.3). This will conclude the proof of Theorem 1.1. To this end we observe that, up to subsequences: • f n ⇀ f weakly in L 1 x,v and weakly * in L ∞ x,v , uniformly in t. Indeed, f n 0 → f 0 in L 1 x,v and Z n → Z in measure µ. Since the latter limit is uniform in s and t, we define the inverse of the flow Z −1 n (t, s, x, v) := Z n (s, t, x, v) and observe that Z −1 n → Z −1 in measure and therefore µ-a.e., uniformly in t. Given ϕ ∈ C c (R 3 × R 3 ), we can estimate The first term in the r.h.s. converges to zero, since Z n → Z µ-a.e. The second term also converges to zero because ϕ is bounded and f n 0 → f 0 in L 1 x,v . Moreover, since f n is equi-bounded in L 1 x,v ∩ L ∞ x,v , uniformly in t, we obtain weak convergence in L 1 x,v and weak * convergence in L ∞ x,v of f n to f , uniformly in t.
• ρ n ⇀ ρ weakly in L 1 x . It follows from the weak L 1 x,v convergence of f n to f . Moreover, thanks to Remark 2, ρ n ⇀ ρ weakly in L s x , for some s > 3.
• ∂ t f n converges to ∂ t f in D ′ and v · ∇ x f n converges to v · ∇ x f in D ′ .
• E n → E uniformly. This is a consequence of Proposition 3.7. Indeed, the r.h.s. of equation (3.15) is uniformly bounded in n. Therefore, by Proposition 3.4, ρ n L m+3 3 is uniformly bounded and Proposition 3.1 yields {E n } n equi-Hölder. Ascoli-Arzelà Theorem guarantees the existence of a uniformly convergent subsequence. The limit couple (E, ρ) satisfies E(t, x) = x−y |x−y| 3 ρ(t, y) dy, since E ∈ M 3/2 and decays at infinity, while ρ ∈ L s , for some s > 3.
, and by the facts that E n → E uniformly and f n ⇀ f weakly in L 1 x,v . We are left with the part of the system (4.9)-(4.10) which involves the point charge. In particular, we define γ n (t) = (ξ n (t), η n (t)) (4.11) and set (ξ(t), η(t)) := lim n→∞ γ n (t) . (4.12) Observe that the limit in (4.12) exists. Indeed, γ n (t) is equi-Lipschitz because of the following estimate: where Lip(γ n ) is the Lipschitz constant of γ n . Proposition 3.3 yields a uniform bound on the first term in the r.h.s. of (4.13), that combined with the uniform bounds on E n proved in this subsection, implies γ n equi-Lipschitz. By Ascoli-Arzelà Theorem, there exists a subsequence {(ξ n k (t), η n k (t))} k which converges uniformly to (ξ(t), η(t)). To perform the limit in (4.9)-(4.10), we observe that • (ξ n (t),η n (t)) → (ξ(t),η(t)). Indeed, (ξ n (t), η n (t)) converges to (ξ(t), η(t)) uniformly and 14) The first term in the r.h.s. of (4.14) converges to zero uniformly. As for the second term, we use that Combining the facts that E n → E, ξ n → ξ and E is uniformly continuous, the last line in (4.15) vanishes as n → ∞.
• F n → F in L 1 x, loc . Indeed, F n → F pointwise, by the uniform convergence of ξ n (t) to ξ(t) up to subsequences, and F n , F ∈ L 1 loc (R 3 ). Therefore, we conclude by Dominated Convergence's Theorem.
• F n · ∇ v f n → F · ∇ v f in D ′ . This follows by rewriting F n · ∇ v f n = div v (F n f n ) and F · ∇ v f = div v (F f ), and by the facts that F n → F in L 1 loc (R 3 ) and f n *
For Φ 4 we compute Since the denominator of the integrand is bounded, we can estimate the above quantity as follows: where in the last inequality we used Proposition 3.6. Thus, condition (4.16) is satisfied and the proof is completed thanks to (4.17). | 9,634.8 | 2017-05-23T00:00:00.000 | [
"Mathematics"
] |
Face Recognition Using Double Sparse Local Fisher Discriminant Analysis
Local Fisher discriminant analysis (LFDA) was proposed for dealing with the multimodal problem. It not only combines the idea of locality preserving projections (LPP) for preserving the local structure of the high-dimensional data but also combines the idea of Fisher discriminant analysis (FDA) for obtaining the discriminant power. However, LFDA also suffers from the undersampled problem as well as many dimensionality reduction methods. Meanwhile, the projection matrix is not sparse. In this paper, we propose double sparse local Fisher discriminant analysis (DSLFDA) for face recognition. The proposed method firstly constructs a sparse and data-adaptive graph with nonnegative constraint. Then, DSLFDA reformulates the objective function as a regressiontype optimization problem.The undersampled problem is avoided naturally and the sparse solution can be obtained by adding the regression-type problem to a l1 penalty. Experiments on Yale, ORL, and CMU PIE face databases are implemented to demonstrate the effectiveness of the proposed method.
Introduction
Dimensionality reduction tries to transform the high-dimensional data into lower-dimensional space in order to preserve the useful information as much as possible.It has a wide range of applications in pattern recognition, machine learning, and computer vision.A well-known approach for supervised dimensionality reduction is linear discriminant analysis (LDA) [1].It tries to find a projection transformation by maximizing the between-class distance and minimizing the within-class distance simultaneously.In practical applications, LDA usually suffers from some limitations.First, LDA usually suffers from the undersampled problem [2]; that is, the dimension of data is larger than the number of training samples.Second, LDA can only uncover the global Euclidean structure.Third, the solution of LDA is not sparse, which cannot give the physical interpretation.
To deal with the first problem, many methods have been proposed.Belhumeur et al. [3] proposed a two-stage principal component analysis (PCA) [4] + LDA method, which utilizes PCA to reduce dimensionality so as to make the within-class scatter matrix nonsingular, followed by LDA for recognition.However, some useful information may be compromised in the PCA stage.Chen et al. [5] extracted the most discriminant information from the null space of within-class scatter matrix.However, the discriminant information in the nonnull space of within-class scatter matrix would be discarded.Huang et al. [6] proposed an efficient null-space approach, which first removes the null space of total scatter matrix.This method is based on the observation that the null space of total scatter matrix is the intersection of the null space of betweenclass scatter matrix and the null space of within-class scatter matrix.Qin et al. [7] proposed a generalized null space uncorrelated Fisher discriminant analysis technique that integrates the uncorrelated discriminant analysis and weighted pairwise Fisher criterion for solving the undersampled problem.Yu and Yang [8] proposed direct LDA (DLDA) to overcome the undersampled problem.It removes the null space of betweenclass scatter matrix and extracts the discriminant information that corresponds to the smallest eigenvalues of the withinclass scatter matrix.Zhang et al. [9] proposed an exponential discriminant analysis (EDA) method to extract the most discriminant information which is contained in the null space of the within-class scatter matrix.
To deal with the second problem, many methods have been developed for dimensionality reduction.These methods focus on finding the local structure of the original data space.Locality preserving projections (LPP) [10] was proposed to find an embedding subspace that preserves local information.One limitation of LPP is that it is an unsupervised method.Because the discriminant information is important to the classification tasks, some locality preserving discriminant methods have been proposed.Discriminant locality preserving projection (DLPP) [11] was proposed to improve the performance of LPP.Laplacian linear discriminant analysis (LapLDA) [12] tries to capture the global and local structure of the data simultaneously by integrating LDA with a locality preserving regularizer.Local Fisher discriminant analysis (LFDA) [13] was proposed to deal with the multimodal problem.It combines the ideas of Fisher discriminant analysis (FDA) [1] and LPP and maximizes between-class separability and preserves within-class local structure simultaneously.In LDA, the dimension of the embedding space should be less than the number of classes.This limitation can be solved by using the LFDA algorithm.
To deal with the third problem, many dimensionality reduction methods integrating the sparse representation theory have been proposed.These methods can be classified into two categories.The first category focuses on finding a subspace spanned by sparse vectors.The sparse projection vectors reveal which element or region of the patterns is important for recognition tasks.Sparse PCA (SPCA) [14] was proposed by using the least angle regression and elastic net to produce sparse principal components.Sparse discriminant analysis (SDA) [15] and sparse linear discriminant analysis (SLDA) [16] were proposed to learn a sparse discriminant subspace for feature extraction and classification in biological and medical data analysis.Both methods try to transform the original objective into a regression-type problem and add a lasso penalty to obtain the sparse projection axes.One disadvantage of these methods is that the number of sparse vectors is at most − 1. is the number of class.The second category focuses on the sparse reconstructive weight among the training samples.Graph embedding framework views many dimensionality reduction methods as the graph construction [17].The -nearest neighbor and the -ball based methods are two popular ways for graph construction.Instead of them, Cheng et al. built the ℓ 1 -graph based on sparse representation [18].The ℓ 1 -graph has proved that it is efficient and robust to data noise.ℓ 1 -graph based subspace learning methods include sparse preserving projections (SPP) [19] and discriminant sparse neighborhood preserving embedding (DSNPE) [20].
Motivated by ℓ 1 -graph and sparse subspace learning, in this paper, we proposed double sparse local Fisher discriminant analysis (DSLFDA) for multimodal problem.It measures the similarity on the graph by integrating the sparse representation and nonnegative constraint.To obtain sparse projection vectors, the objective function can be transformed into a regression-type problem.Furthermore, the space spanned by the solution of regression-type problem is identical to that spanned by the solution of original problem.The proposed DSLFDA has two advantages: (1) it remains the sparse characteristic of ℓ 1 -graph; (2) to enhance the discriminant power of DSLFDA, the label information is used in the definition of local scatter matrices.Meanwhile, the projection vectors are sparse, which can make the physical meaning of the patterns clear.The proposed method is applied to face recognition and is examined using the Yale, ORL, and PIE face databases.Experimental results show that it can enhance the performance of LFDA effectively.
The rest of this paper is organized as follows.In Section 2, the LFDA algorithm is presented.The double sparse local Fisher discriminant analysis algorithm is proposed in Section 3. In Section 4, experiments are implemented to evaluate our proposed algorithm.The conclusions are given in Section 5.
Related Work
In this section, we give a brief of LDA and LFDA.Given a data set = [ 1 , 2 , . . ., ] ∈ R × with each column corresponding to a data sample, ∈ R (1 ≤ ≤ ), the class label of is set to ∈ {1, 2, . . ., }, and is the number of classes.We denote as the number of samples in the th class.Dimensionality reduction tries to map the point ∈ R into ∈ R ( ≪ ) by the linear transformation: The above transformation can be written as matrix form: where = [ 1 , 2 , . . ., ] ∈ R × .
2.1.Linear Discriminant Analysis.Linear discriminant analysis tries to find the discriminant vectors by the Fisher criterion, that is, the within-class distance is minimized and the between-class distance is maximized simultaneously.The within-class scatter matrix and between-class scatter matrix are, respectively, defined as follows: where is the data set of class . is the mean of the samples in class and is the mean of the total data.LDA seeks the optimal projection matrix by maximizing the following Fisher criterion: The above optimization is equivalent to solving the following generalized eigenvalue problem: consists of the eigenvectors of −1 corresponding to the first largest eigenvalues.
Local Fisher Discriminant Analysis.
Local Fisher discriminant analysis (LFDA) is also a discriminant analysis method.It aims to deal with the multimodal problem.The local within-class scatter matrix lw and the between-class scatter matrix lb are defined as where is the affinity matrix. = exp(−‖ − ‖ 2 / ), and is the local scaling of defined by = ‖ − () ‖ where () is the th nearest neighbor of .
The objection function of LFDA is formulated as where tr(⋅) is the trace of a matrix.The projection matrix can be obtained by calculating the eigenvectors of the following generalized eigenvalue problem: Because of the definition of matrix , LFDA can effectively preserve the local structure of the data.
Double Sparse Local Fisher Discriminant Analysis
where = [ 1 , . . ., ,−1 , 0, ,+1 , . . ., , ] is a -dimensional vector in which the th element is equal to zero. 1 ∈ R is a vector of all ones.The ℓ 1 -minimization problem (10) can be solved by many efficient numerical algorithms.In this paper, the LARS algorithm [21] is used for solving problem (10).The matrix can be seen as the similarity measurement by setting the matrix = (+ )/2.Therefore, the new local scatter matrices can be defined as follows: where lw and lb are the weight matrices and defined as The final objective function is described as follows: The optimal projection can be obtained by solving the following generalized eigenvalue problem: When the matrix lw is nonsingular, the eigenvectors are obtained by the eigendecomposition of matrix ( lw ) −1 lb .However, the projection matrix is not sparse.
Mathematical Problems in Engineering 3.2.Finding the Sparse Solution.We first reformulate formulas ( 11) and ( 12) in matrix form.Consider where (lb) is the diagonal matrix and the th diagonal , (lb) = (lb) − (lb) .Similarly, formula ( 12) can be expressed as where (lw) = (lw) − (lw) , () is the diagonal matrix, and the th diagonal element is .Matrices (lb) and (lw) are always symmetric and positive semidefinite; therefore, the eigendecomposition of (lb) and (lw) can be expression as follows: where Σ and Σ are the diagonal matrices.Their diagonal elements are the eigenvalues of matrices (lb) and (lw) , respectively.So lb and lw can be rewritten as where = Σ 1/2 and = Σ 1/2 .The following result which was inspired by [14,16] gives the relationship between problem (10) and the regressiontype problem.
Theorem 1. Suppose that is positive definite; its Cholesky decomposition can be expressed as = , where ∈ R × is a lower triangular matrix.Let = [V 1 , V 2 , . . ., V ] be the eigenvector of problem (15) where > 0 and (:, ) is the th column of .Then the columns of span the same linear space as well as those of .
To obtain sparse projection vectors, we add a ℓ 1 penalty to the objective function (20): Generally speaking, it is difficult to compute the optimal and simultaneously.An iterative algorithm was usually used for solving problem (21).For a fixed , there exists an orthogonal matrix P such that [, P] is × column orthogonal matrix.Then the first term of (21) which is subject to = .The optimal solution can be obtained by computing the singular value decomposition and = .The algorithm procedure of DSLFDA is summarized as follows.
Input: the data matrix .
Output: the sparse projection matrix .
(3) Initialize matrix as an arbitrary column orthogonal matrix.
Experimental Results
In this section, we use the proposed DSLFDA method for face recognition.Three face image databases, that is, Yale [22], ORL [23], and PIE [24], are used in the experiments.We compare our proposed algorithm with PCA, LDA, LPP, LFDA, SPCA, SPP, DSNPE, and SLDA.For simplicity, we use nearest neighbor classifier for classification task and the Euclidean metric is used as the distance measure.our experiments, the face region of each original image was cropped based on the location of eyes.Each cropped image was resized to 32 × 32 pixels.Figure 1 shows the cropped sample images of two individuals from the Yale database.
In the first experiment, we randomly select ( = 2, 3, 4, 5, 6) images per subject for training and the remaining images are for testing.10 time runs were implemented for stable performance.The average rates are used as the final recognition accuracies.For LFDA, the parameter is set to −1 for simplicity.LPP is implemented in supervised model.For SPCA, we manually choose the sparse principal component in order to obtain the best performance.Table 1 shows the recognition accuracies of different methods with the corresponding dimension.
In the second experiment, we experiment with different dimensionalities of the projected space.Five images per individual were randomly selected for training, and the remaining images were used for testing.Figure 2 shows the performance of different methods.expressions.The original size of the images is 243×320 pixels.The images were manually cropped and resized to 32 × 32 pixels.Figure 3 shows the cropped sample images of two individuals from the ORL database.
Experiment on the ORL Face
In the first experiment, we randomly select ( = 2, 3, 4, 5, 6) images per subject for training and the remaining images are for testing.10 time runs were implemented for stable performance.The average rates are used as the final recognition accuracies.The experimental parameters were set as in Section 4.1.
In the second experiment, we experiment with different dimensionalities of the projected space.Five images per individual were randomly selected for training, and the remaining images were randomly selected for testing.Figure 4 shows the performance of different methods (Table 2).
Experiment on the PIE Face Database.
The CMU PIE face database contains 41368 images of 68 individuals.The images were captured under 13 different poses, under 43 different illumination conditions, and with 4 different expressions.In our experiments, we choose a subset (C29) that contains 1632 images of 68 individuals.These were manually cropped and resized to 32 × 32 pixels.Figure 5 shows the cropped sample images of two individuals from CMU PIE database.In the first experiment, we randomly select ( = 3, 6, 9, 12, 15) images per subject for training and the remaining images are for testing.10 time runs were implemented for stable performance.The average rates are used as the final recognition accuracies.The experimental parameters were set as in Section 4.1.Table 3 shows the recognition accuracies of different methods with the corresponding dimension.
In the second experiment, we experiment with different dimensionalities of the projected space.Fifteen images per individual were randomly selected for training, and the remaining images were randomly selected for testing.Figure 6 shows the performance of different methods.data into a low-dimensional subspace whose dimensionality is larger than the number of classes.
Discussion and Conclusion
(3) From Table 3, LPP and SLDA outperform LFDA on the CMU PIE database.However, DSLFDA can achieve better performance than other methods.This point shows that DSLFDA improves not only the performance of LFDA but also the performance of sparse-based method, such as SLDA.The proposed DSLFDA algorithm constructs the graph on the original data and obtains the nonnegative similarity measurement.This is different from SPP and DSNPE.
(4) From the experimental results, we obtain that SPP can get competitive performance on CMU PIE database, rather than ORL and Yale databases.The reason may be that the sparse representation needs abundant training samples.Conversely, the nonnegative similarity measurement in DSLFDA is adaptive and can overcome the drawback of sparse representation.
(5) DSNPE can be regarded as an extension of SPP.It can extract the discriminant information and perform better than SPP.On the Yale database, DSNPE can achieve the best recognition performance when the training samples per individual are four and five.
Conclusion.
In this paper, we proposed a sparse projection method, called DSLFDA, for face recognition.It defines a novel affinity matrix that describes the relationships of points on the original high-dimensional data.The sparse projection vectors are obtained by solving the ℓ 1 -optimization problem.Experiments on Yale, ORL, and CMU PIE face databases indicate that DSLFDA can get competitive performance compared to other dimensionality reduction methods.We only focus on supervised learning in this paper.Because a large amount of unlabeled data is available in practical applications, semisupervised learning has attracted much attention in recent years [25][26][27].One of our future works is to extend our approach under the semisupervised learning framework.On the other hand, DSLFDA needs the local within-class scatter matrix positive definite.We add an identity matrix to the local within-class scatter matrix for regularization.This may motivate us to find the regularization method to approximate the local within-class scatter matrix well.
Figure 1 :
Figure 1: Sample images of two individuals from Yale database.
4. 1 .Figure 2 :
Figure 2: The recognition performance versus different dimensions on the Yale database.
Database.The ORL database contains 400 images of 40 individuals.Each individual has 10 images.The images were captured at different times, under various light conditions, and with different facial
Figure 3 :
Figure 3: Sample images of two individuals from ORL database.
Figure 4 :
Figure 4: The recognition performance versus different dimensions on the ORL database.
Figure 5 :
Figure 5: Sample images of two individuals from CMU PIE database.
Figure 6 :
Figure 6: The recognition performance versus different dimensions on the CMU PIE database.
Table 1 :
The top recognition rates (%) and the corresponding dimensions on Yale database by different methods (mean ± std).
Table 2 :
The top recognition rates (%) and the corresponding dimensions on ORL database by different methods (mean ± std).
Table 3 :
The top recognition rates (%) and the corresponding dimensions on CMU PIE database by different methods (mean ± std). | 4,071.2 | 2015-03-26T00:00:00.000 | [
"Computer Science"
] |
TIME REDUCTION EFFECTS OF STEEL CONNECTED PRECAST CONCRETE COMPONENTS FOR HEAVILY LOADED LONG-SPAN BUILDINGS
The characteristics of large logistics buildings are their long spans and the ability to take heavy loads. Usually,
PC components are used for their frames to ensure quick construction. However, the erection of most pin jointed PC structures increases
the time and the cost incurred for ensuring structural stability and construction safety. To solve this problem, “smart” frames have been
developed, which have tapered steel joints at both ends of the PC components. A smart frame with the moment frame concept not only assures
structural stability and construction safety, but it also simplifies and quickens the erection because of its tapered joint detail. The purpose
of this study is to compare the erection time and cost effects of the steel connected PC components for heavily loaded long-span logistics
buildings with the existing PC frames. For this study, we selected a logistics building constructed with PC components and redesigned it as
the smart frame, and the erection simulations were performed. We analyzed the time reduction effects of the smart frame. Our results confirmed
that the use of the smart frame reduced the erection time and cost practically. Our investigations will help develop the erection simulation
algorithms for smart frames.
Introduction
A rapid increase in the online sales of large retailers has increased the demand for large logistics buildings worldwide. The characteristics of these buildings are their long spans, large floor heights, and ability to take heavy loads. Mostly, precast concrete (PC) components are adopted for businesses that need to open quickly (Rajagopal, 2010;S. H. Kim, Choi, S. K. Kim, & Lee, 2010). Most PC frames having heavily loaded long-span logistics buildings are designed with pin joints, which are installed by using a simple mounting (Elliott & Jolly, 2013). Consequently, structural stability and construction safety problems might occur during PC member erection (Fathi, Parvizi, Karimi, & Afreidoun, 2018). Solving this problem would increase the construction time and costs (Hong, G. Kim, Lim, & S. Kim, 2017). In other words, a large amount of equipment and human resources are necessary to safely connect the girder with the column, which makes time a critical factor. This problem can be easily solved by using "smart" frames installed with tapered steel joints similar to a steel structure at both ends of the precast concrete components (Lee, S. E. Kim, G. H. Kim, Joo, & S. K. Kim, 2011). Similar to steel structures, smart frames that involve the moment frame action not only secure the structural stability and the construction safety during the erection process (which is the problem of pin-joint PC frames), but their tapered joint detail also makes the erection easier and quicker than regular frame constructions (Son, Lim, & Kim, 2018;Kim et al., 2010). In particular, multiple studies have proven that smart frames are superior to existing PC frames in terms of their structural stability, construction safety, and economic feasibility (Joo, S. E. Kim, G. J. Lee, S. K. Kim, & S. H. Lee, 2012a;S. Kim, Hong, J. H. Kim, & J. T. Kim, 2013a). When smart frames with the above-mentioned advantages are adopted (instead of PC frames) for heavily loaded long-span logistics buildings, cost reduction and shortened construction time is expected to be realized.
The purpose of this study is to compare the erection time and cost effects of the steel connected PC components (i.e., the smart frames) for heavily loaded longspan logistics buildings with the existing PC frames. We used the case study of a logistics building constructed with pin-joint PC components. Then, the same building was redesigned using a smart frame, and the installation simulations were performed by considering the site conditions. Then, we compared the results. We first considered the erection method of heavily loaded long-span logistics buildings. Second, we examined the characteristics of the smart frame and the erection process in comparison with the pin-joint PC frame method. Third, we chose a logistics building constructed with the pin-joint PC frame method as the case study, and the erection simulations were implemented by adopting the smart frame. Then, we compared the erection time of the smart frames with the erection time when the pin-joint PC frames were used. Fourth, we analyze the cost reduction associated with the time reduction and discuss the study results.
Preliminary study 1.PC erection of heavily loaded long-span buildings
We used PC erection for heavily loaded long-span buildings to shorten the construction time without being seriously impacted by the climate and weather changes (Lee, Lim, & Kim, 2016). Time reduction was found to be effective for quick business openings and for a reduced payback period (Son et al., 2018). In addition, it is difficult and dangerous to perform temporary work, such as formworks, because the floor of the logistics buildings is very high. Thus, it is advantageous to adopt PC erections that may minimize the amount of formwork (Kim et al., 2010). In general, there are three types of PC component erections (see Figure 1).
As illustrated in Figure 1(a), a floor-by-floor erection secures the structural stability of buildings because the joint concrete is poured after installing the columns, gird-ers, and slabs required for each floor. Then, the upper floor members are repeatedly mounted, which makes the activities sequence critical (Son et al., 2018). Most pin-joint PC frames are erected in this way. Figure 1(b) shows a cascading erection in which the members are piled in cascades within the coverage of a crane boom. This method is applied when there is not enough time for floor-by-floor erection. Although cascading erection is effective in reducing the time, it is difficult to erect all the PC components using the cascading method because the coverage of the crane boom is limited. Figure 1(c) demonstrates a section-by-section erection. Unlike other erection methods, the PC components of a specific section are erected on all floors, which make it easier to ensure the efficiency of the equipment operation and the work space (Lim, Joo, Lee, & Kim, 2011). Furthermore, the components can be erected in all directions when a crane is accessible (Kim et al., 2010). In the case of pin-joint PC frames, mostly floor-by-floor erection is adopted; cascading erection is adopted for reducing the construction time (Nawy, 2008). However, when moment frames, such as smart frames, are erected, any of the three erection methods may be applied.
Smart frame
During the erection of pin-joint PC frames, structural stability and construction safety should be secured; therefore, there are limits to the erection method and a substantial amount of time is required for the erection (Polat, 2008). Moreover, it is difficult to calibrate the errors that may occur when pouring joint concrete after placing the girders on the top of columns (Arditi, Ergin, & Günhan, 2000). As shown in Figure 2, smart frames with tapered steel connections of columns and girders not only secure a rapid and precise erection, but they also provide structural stability as soon as the connection is made (Hong, Park, Kim, & Nzabonimpa, 2016;. As illustrated in Figure 2(a), we used bolts to connect the composite precast concrete (CPC) components of the columns and girders similar to steel structure . As shown in Figure 2(b), the steel can be arranged only on the joints of columns and girders for general buildings within a span of approximately 8 m. Therefore, this structure has the advantages of both reinforced concrete (RC) and steel frame structures (Nawy, 2008). All the joints are connected with bolts; therefore, the structural performance of smart frames is similar to that of steel structures. Also, we poured the concrete into the joints after erection, which made it a moment frame (Polat, 2008;Arditi et al., 2000). Table 1 shows a comparison between the conventional PC frames and the smart frames that use steel-jointed CPC components. The conventional PC frames are structurally pin jointed, whereas the smart frames are momentjointed similar to the steel frames Lee, Park, Lim, & Kim, 2013).
We erected a conventional PC frame by using the floor-by-floor method, as shown in Figure 1(a), mainly for structural stability and construction safety. We also partially erected a frame by using the cascading method, as shown in Figure 1(b), for shortening the construction time. Therefore, there were many constraints in the planning for the crane path (Arditi et al., 2000). However, a smart frame is available in all three erections (see Figure 1); therefore, crane path planning can be easily and diversely established (Joo et al., 2012). Resistance to the lateral force of the PC frame is made up of heavy RC shear walls and/or cores and braces, whereas the smart frame itself makes resistance to the lateral force similar to that of steel frames (Holden, Restrepo, & Mander, 2003). Therefore, in the case of PC frames, the building cores generally act as the shear walls resisting the lateral forces, whereas these cores act only as vertical passageways in smart frames . Consequently, the RC cores of the PC frame are very thick and designed for heavy reinforcement, and in most cases, they are scheduled as critical activities during construction (H. K. Choi, Y. C. Choi, & C. S. Choi, 2013). However, the cores of the smart frame are simple structures that support their own weight; therefore, they are constructed independently and quickly during the CPC erection (K. H. Kim, T. O. Lee, S. H. Lee, & S. K. Kim, 2012). The PC frame requires heavy and expensive PC slabs, such as Double-T, plastic ribbed slabs, and so on (Casadei, Nanni, Alkhrdaji, & Thomas, 2005; Yardim, Waleed, Jaafar, & Laseima, 2013). Howev-er, relatively light and cheap deck plates can be used for the smart frames . Finally, during the erection process, the PC frame requires temporary lateral support for construction safety and structural safety, but the smart frame only needs props to maintain verticality (Hurst, 2017;Hong et al., 2008).
Advantages of a smart frame
A smart frame has three main advantages: (i) expanded available space, (ii) construction time reduction, and (iii) increased convenience in erection Kim et al., 2010;Lim et al., 2011). First, the sectional sizes of the CPC components in the smart frames are relatively smaller than the sizes of the PC components for the same design conditions; this results in an increase in the available space (Lim et al., 2011). As shown in Figure 2(a), when a steel frame is buried in the entire span, the sectional sizes of the columns and beams will be approximately 20% less than the sectional sizes of the columns and beams in the existing PC structure (Hong et al., 2009). The available space will be larger than the existing PC structure whereas the structural performance would remain the same. In addition, the amount of concrete and forms decrease as the sectional size is reduced, but the quantity of the steel frames increases. As a result, the material and production Tapered steel connection cost of the CPC components was approximately 2% to 3% lower as compared with that of the PC components (Hong et al., 2010a;Hong, G. Lee, S. Lee, & Kim, 2014). Second, for heavily loaded long-span buildings, there were fewer critical activities when smart frames were adopted instead of the existing pin-jointed PC frames (see Figure 3); this significantly reduced the construction time (Lee et al., 2016). As shown in Figure 3(a), all the processes were critical ranging from the column erection to the completion of the curing after casting the topping concrete because of structural stability. However, the smart frame can secure structural stability when completing steps 1 to 3 shown in Figure 3(b). In Figure 3(b), "Step 3. Installing PC slabs or deck plates" is listed as a critical activity for construction safety rather than structural stability.
Step 3 may be skipped for further time reduction but any risk arising because of the lack of working space should not be permitted. For reference, the activities omitted from Figure 3(b) when compared with those in Figure 3(a) are not critical activities, and they can be performed during the upper CPC erection.
As shown in Figure 3(a), plumbing with propping for the PC columns of the PC frame needs to be done first, and then bottom grouting should be performed for structural stability. The upper girders need to be installed first, and the PC slabs or the deck plates are to be installed subsequently. Then, the joint concrete is poured to bind the column and girder components to ensure structural stability. After installing the remaining slab rebar, the topping concrete is poured, and curing is executed. All these are critical activities. For reference, steps 6 and 7 may be omitted, and step 8 may be performed for reducing the construction time. However, to perform step 8, the lower part of the upper PC columns should be filled with the padding concrete; the filling needs to be as thick as the slab. Third, the smart frame not only secures a quicker and more precise erection than the steel frame structure owing to the tapered connection with the L-shaped steel guide (see Figure 4), but it also ensures improved structural stability and construction safety as compared with the existing PC frame. The study conducted by Hong et al. (2017) details the engineering principles and the effects related to the tapered connection with the L-shaped steel guide. As shown in Figure 4, the steel web section located at both ends of a girder was inserted into the L-shaped plates pre-installed in the T-type bracket of a column, as shown in Figure 4(a). When the crane shackle is quickly and safely set because of the girder's gravity load (see Figure 4(b)), it is immediately removed for the next erection. The crane lifts another girder while bolting is performed, as shown in Figure 4(c).
As illustrated in Figure 4(d), the L-shaped reinforcement plates with tapered webs of girder steels have three roles . First, the flange reinforcement performs a temporary safe receipt of the girder's web steel in the reinforcement plates when an axial eccentricity of the beam occurs while approaching the girder. Furthermore, these plates support the weight of the girder after the girder reaches the setting location. Second, the web reinforcement acts as a guide to axially align the column and the beam steel, which leads it to the exact setting location along the web section. Third, the rounded corner of the reinforcement plates slide and set the girder steel inside the Step 1. Erecting PC columns Step 2. Grouting the bottom of columns Step 3. Installing girders Step 4. Installing PC slabs or deck plates Step 5. Filling joint concrete of columns, girders and slabs components Step 6. Casting topping concrete of slab Step 7. Curing topping concrete Step 8. Erecting upper PC columns Step 1. Erecting CPC columns Step 2. Installing girders Step 8. Erecting upper CPC columns Step 3. Installing PC slabs or deck plates a) Pin-jointed PC frame b) SMART frame reinforcement plate along the slope because of the gravitational load of the beam. As a result, smart frames applied with the tapered connection of the L-shaped plate reduce the erection time and increase precision when compared with the steel frames and the existing PC frames. Furthermore, the structural stability and construction safety of the smart frames during erection are superior to those of the existing PC frames Joo et al., 2012a;Hong et al., 2010b). Finally, a section-by-section erection was applied for the smart frames, as shown in Figure 1 (c). In this case, the PC components could be erected within a smaller working radius close to the structure; therefore, a crane with less lifting capacity may be used as compared with the pin-joint PC erection. A small lifting capacity means better mobility, which results in quicker erection (Hong et al., 2009). The rental cost will be reduced as well.
The crane capacity was determined by the lifting load and the working radius. When the smart frame was applied, it was possible to have erection with a reduced lifting load and working radius; this makes it more advantageous than pin-joint PC erection in terms of the time and cost. However, CPC components should be made more precisely than convention PCs. This is because the accuracy of the erection cannot be secured if the connection steel is slightly misplaced during the CPC manufacturing process. And the erection of CPC components should follow the precision and process of steel erection. Table 2 gives a brief description of the case project selected for analyzing the effects of a smart frame application. The case project is a logistics building erected with pin-joint PC components, and it is characterized by a high floor and a heavy unit member. The PC components consisted of 942 columns, 1273 girders, and 3985 slabs. The length and weight of the general column were 9.2 m and 14.13 tons, respectively. The length and weight of the girder components were 11 m and 26.40 tons, respectively. The longest girder component was 23-m long and weighed 85.39 tons. The columns and girders were structured in the PC frames and the slabs in the PC with topping concrete. We designed 14 cores in the RC structure for resistance to the lateral force. The top floor including the roof was designed in a steel structure.
Brief description of the case project
The site condition of this case is shown in Figure 5(a). The logistics building was arranged with minimum free spaces on the site because its land was expensive. An adjacent building was located on the right side of the case site, and a 28-m-wide road was on its left side. The upper part of the site bordered a 16-m-wide road with a trench that was 5-m deep, and its lower part was close to a retaining wall that was 4-m high. Therefore, there was very limited space for the crane to move (see Figure 5(b)). As shown in Figure 6, we assumed five scenarios to establish several erection plans with the crane-moving path. In the final erection plan (see Figure 5(b)), we used three cranes (weighing 550 tons) for the erection of heavy and long PC components by considering the working radius.
It is possible to set up alternative PC erection plans by considering the site conditions, as shown in Figure 6. The alternatives illustrated in Figure 6 are evaluated by influencing factors, such as the time, cost, and crane path (see Table 3). As described in Table 3, Plan 1, Plan 3, and Plan 4 have the problems of the erection time being more than the time scheduled or budgeted because of the use of four cranes. As illustrated in Figure 5(a), no crane path was available because of the site conditions when Plan 2 was adopted. As a result, Plan 5 was chosen as the final plan that satisfies both the schedule and the budget, and it was divided into the three Zones A, B, and C, as shown in Figure 5(b). Then, each zone was sub-divided into A1-A9, B1-B10, and C1-C8 according to the erection schedule.
In site conditions where cranes are free to operate, three erection scenarios are generally possible, as shown in the Figure A above. In the case of Figure 7(a), the building is divided into 2 zones for erection. The two cranes are used to install the left and right sides simultaneously. In addition, it is possible to install up and down simultaneously. In the case of Figure 7(b), the building is divided into three zones and erection is performed simultaneously at one end of the building using three cranes. In terms of management, it is also possible to install CPCs from the bottom up or from one side to the other. Thus, four alternative scenarios are possible. In the case of Figure 7(c), the building is divided into 4 zones for erection. The four cranes are used to install the left and right sides at the same time. This method can be also installed up and down. As a result, Figure 7 is largely divided into three categories, but 8 alternative scenarios are possible depending on the management strategy.
Actual erection of the case building
We performed the actual erection of the case building based on the plan shown in Figure 5(b), and Figure 8 presents the monthly process. As shown in Figure 1(a), floor-by-floor erection was the first priority. However, cascade erection was applied for partial spans, as shown in Figure 1(b), to meet the tight schedule. A total of 172 calendar days were required to install all the PC components. Figure 8(a) shows the work status one month after the PC erection; three cranes erected some PC columns on the first floor and some girders and slabs on the second floor of the Zones A, B, and C (see Figure 5(b)). The work status after two months is shown in Figure 8 has been completed are installed with steel frames on the fourth floor.
In this study, we aim to analyze the reduction in erection time when a steel-joint smart frame is adopted instead of a pin-joint PC component, which requires 172 calendar days for erection. To do this, we changed the design from the application of the conventional PC frame to the smart frame.
Problem analysis of the PC erection schedule
The case site is characterized by the conventional PC frame (unlike the data in Table 1). When compared with the smart frame, the conventional frame lacks structural stability and construction safety during the erection process; this makes it less advantageous in terms of the time and cost. Considering the site conditions, we established alternatives (see Figure 6) and evaluated factors, such as the crane path, time, and cost, to decide a final erection plan (see Table 3). Figure 9 shows the actual schedule for PC erection and RC core works of the case study that has been set according to Figure 5(b). There is not much difference in the erection time among the Zones A1 to A5, B1 to B6, and C1 to C5. We can see that there is a significant difference in the time required to erect the PC components left at the crane paths after completing the erection of the zones. In other words, the PC erection time of the A6 to A9 zones differ from that of the B7 to B10 and C6 to C8 zones, which is approximately more than 3 weeks and 1 month. This is because there are more PC components left at the crane path of Zone A than Zones B and C, as shown in Figure 5(b). The section designed for the application of the steel frames should be completed before leaving the zone. Zone A6 needs to be erected after completing the steel frame (AS1) work of the upper A6 shown in Figure 5(b); therefore, there is a time interval of approximately two calendar weeks at the (M + 4)th month of Zone A, as shown in Figure 8. In addition, A9 can be erected only after the steel frame (AS2) work of the lower A8 is completed; therefore, there is a delay of 10 calendar days for the PC erection. The erection work was done 5 days per week for 8 hours each day.
Unlike the steel-joint smart frames, the PC frame erection schedule had three problems with respect to the erection time. First, the erection time of the unit PC component for the PC frame was longer than that of the CPC for the smart frame. In Figure 3, the installation of the PC columns and girders involved a series of critical activities, including joint grouting and filling, PC slab installation, and topping with concrete, to secure structural stability and construction safety. Moreover, unlike connected steel CPCs, a simple saddling connection was applied for the PC columns and girders. This connection itself was timeconsuming. For instance, the average erection periods of the unit column, girder, and slab for the PC frame were 39, 29, and 12 min, respectively. However, the average erection periods of the unit column, girder, and slab for the smart frame was 13, 10, and 12 min, respectively. As a result, the average amount of PC components that a crane could erect in one day was 5 columns, 6 girders, and 8 slabs for the PC frame. However, 12 columns, 12 girders, and 17 slabs could be erected for the smart frame. Second, floor-by-floor erection was our first priority for the PC frame to secure structural stability and construction safety; cascade erection was applied for partial spans. If section-by-section erection was adopted, the crane-moving path would drop more than threefold. For example, in Figure 8, if the first to the third floors of the A1 to A5 zones were to be erected, we would need to move the heavy-capacity crane back and forth in the zones at least three times; this would make the erection was more time-consuming. Also, additional time was required for erection at the A6-A9, B7-A10, and C6-A8 zones that were used as crane paths.
Third, the construction of the RC cores (see Figure 9) was the most time-consuming critical activity because the RC core walls of the PC frame were very thick and designed for heavy reinforcement to resist lateral forces. For reference, the floor height of the RC core was 10 m. Thus, the scaffolding safety needed to be performed inside and outside the core for the rebar and form works, and the concrete needed to be poured twice. Each floor required 17 days. These problems were previously mentioned in Table 1. We can expect the structural performance and erection of the smart frame to be similar to those of steel frames; therefore, the above-mentioned problems will be solved resulting in dramatic time reduction.
Erection time analysis of the smart frame
In this study, we compared the reduction in erection time between a smart frame and a frame using PC components. The case building had been initially designed using PC components; we redesigned it using a smart frame consisting of CPC components. The study was conducted in three steps. First, we established the alternatives to the erection plans that reflected the characteristics of the smart frame. Second, the erection time of each alternative was estimated and compared with that of the conventional PC frame. Third, we estimated the cost reduction arising from the time reduction. As shown in Figure 2 and Table 1, smart frames have features and advantages that are similar to those of steel frames. In addition, as shown in Figure 3, the components were erected with simpler activities and quickly fabricated using a tapered connection with an L-shaped steel angle (see Figure 4).
From Figure 6, we can see that five alternatives to the erection plans for the smart frame could be drawn under the same site conditions of the PC frame. Here Plan 2 was excluded because it was difficult to carry out the erection similar to the PC frame. The remaining erection plans are specified in Figure 10. We adopted section-by-section erection for the smart frame to minimize the crane-moving path; therefore, detailed zoning was possible, as shown in Figure 10. When the erection time was calculated based on the erection work that lasted five eight-hour days every week for each plan, Plan 3 had the shortest erection time (78 days); four cranes were used for erection. Three cranes were used for Plan 4 and Plan 5, which required 95 days and 89 days, respectively, for erection.
The smart frame reduced the erection time to half or more than half of erection time required for the PC frame, which needed 172 calendar days for erection (see Table 4). This was because the three problems stated for the PC frame erection schedule had been solved. In particular, as shown in Figure 5(b) and Figure 8, the PC frame applied with floor-by-floor and cascade erection plans use up the crane paths to secure structural stability during erection; this results in a longer erection time. However, a smart frame adopts section-by-section erection, as shown in Figure 10, which reduces the erection time. Figure 11 shows an erection simulation for Plan 5 of the smart frame with the shortest erection time when three cranes were used just like in the case project. Unlike Figure 8, Figure 11 does not have any activity that involves the crane paths.
Plans 1, 3, and 4 also show that the erection time was reduced to approximately 40~50% of the floor-by-floor erection time. In the case of the PC structure with a large floor area (see Figures 5 and 8), arranging cranes inside the building was unavoidable, and floor-by-floor erection was adopted for structural stability and construction safety. In such cases, it was confirmed that additional erection time related to the crane paths is required, as shown in Figures 8 and 9; this ultimately extends the construction time. (Lee et al., 2016;Kim et al., 2010). Additionally, when steel-joint CPC components of the smart frame were used instead of the PC frame, there was no increase in the material costs. In several studies, we found that the material costs decreased by approximately 2-3% (Joo, Kim, Lee, & Lim, 2012b;S. H. Lee, S. H. Kim, G. J. Lee, S. K. Kim, & Joo, 2012). This was because although the steel section may be added when the smart frame is applied instead of the PC frame, the rebar quantity decreases and the section size of the components reduces. This ultimately reduces the amount of concrete required (S. Kim, Hong, Ko, & J. T. Kim, 2013). It might be too complicated to calculate the cost reduction other than the direct; therefore, the study analyzes only the direct cost reduction that is simple and clear.
Above all, the erection time based on the calendar day stated in Table 4 should be changed to the working day to estimate the labor cost. For example, it takes 54 calendar days to complete Zone A of Plan 1 assuming that people work five days a week. However, the working days excluding holidays and rainy days are only 36 days, as shown in Table 5. Table 5 shows the erection time listed in Table 4, which is converted into working day units. The PC frame needs 311 crews, whereas smart frame requires a minimum of 174 crews, and a maximum of 183 crews. Next, the number of crew members should be checked to estimate human resources for each zone. The erection work of the case project required seven persons per crew, that is, one signal man, two persons for the ground floor, and four persons for the upper floor. For instance, Zone A of Plan 1 requires 36 crews × 7 persons/crew = 252 man-days (MDYs). The input human resources for each plant of the smart frame and the PC frame was calculated by using the above method, and the results are shown in Table 6. Figure 12 shows the erection schedule of the smart frame (Plan 5). It differs from the actual schedule of the case project illustrated in Figure 9. First, the smart frame was used for the erection plans in which the 1st to 3rd floors of each zone (i.e., Zones A1-A8, B1-B7, and C1-C8) were completed before moving onto the next zone. In the erection plans of the conventional PC frame, the first floors of all the zones are completed before moving onto the next floor. Second, none of the activities shown in Figure 12 used up the crane paths. such as A6-A9, B7-B10, and C6-C8 (see Figure 9). Third, the activities of the RC cores were relatively shorter in Figure 12 than in Figure 9. As stated previously, this was because the RC core walls of the PC frame were designed for resisting lateral forces with very thick walls and heavy reinforcement. However, the RC cores in the smart frame were simple vertical pass ways; therefore, they were designed with relatively thin walls and light reinforcement. Furthermore, the erection time of the CPC components was shorter than that of the PC components, which reduced the overall construction time.
Erection cost analysis of the smart frame
The cost reduction caused by the reduced erection time of the smart frame directly decreases the labor and equipment costs. Indirectly, this may reduce the site manage-It takes 2,177 MDYs for the PC frame. For the smart frame, the plane requires a minimum of 1,218 MDYs and a maximum of 1,281 MDYs. Table 7 shows the actual labor cost paid for the case project (168.14 USD/MDY). For instance, the estimated labor cost for Zone A of Plan 5 was 406 MDY × 168.14 USD/ MDY = 68,265 USD. The labor cost of the PC frame was 366,041 USD. However, the smart frame labor cost was in the range of 204,796~215,388 USD. Thus, the labor cost of the smart frame was 41.2~44.1% less than that of the PC frame.
The amount of equipment to be used should be calculated to estimate the equipment cost. Unlike the human resources calculation, which was based on the working day, the equipment quantity of heavy-duty cranes was calculated based on the calendar day. The equipment quantity and cost were decided based on the rental terms and conditions because heavy-duty cranes that exceeded 500 tons could not be transported as a single piece; they had to be disassembled before transportation. Then, they needed to be reassembled before use and disassembled thereafter for return. The equipment rental period was basically monthly for the case project, and the minimum rental period was weekly after more than one month. For example, when it took 32 calendar days for the work, the rental period was one month and one week. Table 8 shows the estimated result when the equipment quantity of the case project was calculated under the above conditions. As stated in Table 8, the equipment quantity of the PC frame for each zone was large. Unlike the smart frame that required 8.70~9.15 equipment months (EQMs), the PC frame needed 15.60 EQMs.
The equipment quantity in Table 8 was converted into the equipment cost, as shown in Table 9. For the PC frame, 909,728 USD was paid for the equipment. However, the estimated cost was in the range of 530,970~585,836 USD for each plan when the smart frame was adopted. Therefore, we can conclude that using the smart frame instead of the PC frame will reduce 35.6~41.6% of the cost. For reference, the monthly and weekly rental fees of the case project were 53,097 USD and 15,044 USD respectively. Also, the cost for mobilizing and demobilizing a crane was 17,699 USD. For instance, the equipment rental cost for Zone A of Plan 5 was 176,990 USD (53,097 USD/month × 3 months + 15,044 USD/week × 0 week + 17,699 USD). The labor and equipment cost reduction that result from the time reduction are shown in Table 10. It is estimated that the PC frame requires 1,275,769 USD, whereas the smart frame requires 735,765~796,515 USD. Therefore, we concluded that all the erection plans of the smart frame may result in a cost reduction of 37.6~42.3% when compared with the PC frame.
In particular, the erection time of Plan 3 that applied the smart frame (see Table 4) was only 78 days, which was the shortest; however, its cost reduction was the lowest (see Table 10) because four cranes were used. It took 89 days for the erection when Plan 5 was used (see Table 4) with only three cranes. Thus, the equipment was operated effectively under the rental conditions, which resulted in the lowest equipment cost, as shown in Table 9. As a result, the cost reduction of the smart frame against the PC frame was ultimately quite high. Although increasing the equipment quantity may shorten the erection time, the equipment cost accounts for a large portion of the erection cost, which results in lower cost reduction. In particular, we required an additional 17,699 USD for the mobilization and demobilization for every additional crane unit.
For reference, in terms of overall cost including PC production cost in plant and transportation cost, SMART frame has about 6.6% cost reduction effect compared to PC frame. The reason is that erection cost accounts for 15.5% of the overall cost in the case project, while in-plant PC production cost and transportation cost are 76% and 8.5% of the overall cost, respectively.
Discussion
As shown in Table 4, the erection time of the smart frame was reduced by 40~50% as compared with that of the PC frame. In addition, we concluded that the direct cost of the smart frame was approximately 40%, as shown in Table 10. According to the cost analysis of the case study, heavily loaded long-span logistics buildings use heavy-duty cranes that weigh approximately 500 tons; therefore, the impact of the equipment cost on the overall construction cost was greater than that of the labor costs. Therefore, it is desirable to use the least amount of equipment and simultaneously satisfy the required construction time. However, if the erection time is insufficient, the equipment quantity should be increased even when the partial cost may increase. In other words, time and cost are major factors.
The time-cost conflict was influenced by the erection plan, and the time and cost for each plan needed to be quickly estimated to support decision-making. However, it takes time and effort to estimate the erection time and cost suitable for various plans. In this study, we confirmed that the cost reduction corresponding to the time reduction can be defined by using a mathematical equation. Therefore, we can mathematically define the logical relationship that reflects the zoning of each erection plan, the human resources arrangement, and the equipment rental conditions. Figure 13 represents the erection plans illustrated in Figure 5(b) and Figure 10 in the matrix form. For instance, the erection times of Zones A, B, C, and D for the erection plan in Figure 10(a) corresponds to Z 11 , Z 21 , Z 31 , and Z 41 of Figure 13(a). Accordingly, the erection of A1-A5, B1-B11, C1-C4, and D1-D5 in Figure 10(a) is applicable to S 11 -S 51 , S 12 -S 112 , S 13 -S 43 , and S 14 -S 54 of Figure 13 here: (1), we calculated the erection cost by adding up the cost of each zone, and this cost was again estimated in the division of labor and equipment cost. The labor cost was calculated by multiplying the human resources in a working day by the unit rate of human resources, and the equipment cost was estimated using the equipment quantity and the rental fee in a calendar month and week, which included the mobilization and demobilization cost. Eqn (2) is applied to calculate the human resources in a working day, and the sum of Zone j in Figure 13(b).
Eqn (3) is the equipment quantity in a calendar month, and Eqn (4) is the equipment quantity in a calendar week in Zone j according to the equipment rental conditions. Eqn (5) is the conversion of the erection time in a working day into the erection time in a calendar day in which the calendar day per month is 30 days and the working day is 20 days. This is done because the equipment is rented based on the calendar day. For reference, the above equations are defined based on zones because one unit of crane is arranged for each zone.
In order to verify the proposed mathematical formulas, we apply them to the case of Plan 5 as follows.
When erection time of Plan 5 in a working day is calculated according to Figure 13(b), the result it is shown in Table 11. As shown in Table 11, the erection time and Eqn (2) calculate the manpower by Eqn (6). When manpower is estimated by the erection time of Table 11 and Eqn (2), the result is the same as Eqn (6).
In order to get the equipment cost, when calculating the equipment quantity of each zone in a calendar day by using Eqn (5), the results are the same as Eqns (7), (8) and (9) According to the equipment rental condition, when converting the results of Eqns (7), (8) and (9) to the monthly and weekly equipment quantities, the results are the same as Eqns (10) For reference, equipment rental conditions are divided monthly and weekly. And weekly rental cost is slightly higher than 1/4 of monthly rental cost. It is therefore advantageous to rent monthly if the equipment is used for more than three weeks. Rounddown, Roundup and Ceiling functions are used to calculate monthly and weekly equipment quantities to reflect these conditions. In other words, applying the Ceiling (EQD j , 30/4) in Eqns (10) and (11) means renting monthly if the equipment is used for more than three weeks. Plan 5 requires equipment for 87, 89 and 86 calendar days, such as Eqns (7), (8) and (9). In this case, it is advantageous to rent a total of 9 months for 3 months each, which means that it is advantageous not to do the weekly rent.
Finally, the result of calculating the total erection cost using Eqn (1) is the same as Eqn (12). For reference, as described in Section 3.2, the labor unit rate is 168.14 USD/ day and the equipment rental cost is 53,097 USD/month and 15,044 USD/week. And the cost for mobilizing and demobilizing a crane is 17,699 USD.
The maximum erection time of Plan 5 estimated by the proposed formula is 89 days and the erection cost is 735,765 USD. Since these results are the same as the ones in Table 4 and Table 10, the proposed formula is verified.
A variety of erection plans can be set for large PC buildings similar to the case project mentioned in the study, and each plan may have the time and cost conflicts as stated earlier. Therefore, it is very important to quickly check for the conflicts to decide the erection plan. In addition to confirm the time reduction provided by the smart frames, we obtained results to develop mathematical equations that accurately and quickly calculated the time and cost for the erection planning alternatives. The proposed equations may be used to effectively and quickly decide a final erection plan.
Conclusions
A smart frame is an erection technology developed to improve upon the disadvantages associated with the conventional PC frame. In this study, we analyzed the direct cost reductions resulting from a shortened erection time when a heavily loaded long-span logistics building designed using a conventional PC frame was replaced by a steel-joint smart frame. A case project was chosen for this analysis. Data on the actual time and cost input was compared with the simulation data of the smart frame that was proposed as an alternative. We confirmed the following in the study. Heavily loaded long-span logistics buildings with large floor areas were divided into zones and sections and arranged using several cranes for completion within a target date. Also, floor-by-floor erections were adopted for the pin-joint PC frame just like the case project to secure structural stability and construction safety. In this case, the crane path was used up, which increased the erection time. However, we performed a section-by-section erection of the steel-joint smart frame, and this significantly reduced the erection time. Furthermore, besides the matters specified in Table 1, the CPC components of the smart frame can be erected more quickly and safely with a tapered connection than with the PC components of a PC frame.
The following results were drawn from the study. First, we confirmed that the erection time of the smart frame was reduced by approximately 50% as compared with the erection time of the PC frame.
Second, we found that the direct cost decreased by 37.6~42.3% as compared with that of the PC frame because of the time reduction. To be more specific, the labor cost decreased by 41.2% to 44.1%, and the equipment cost decreased by 35.6% to 41.6%. Here, we excluded the indirect cost reduction owing to the reduced site management costs influenced by the time reduction and the payback period reduction of the investment because of early completion. If indirect cost reduction was included for analysis in the study, the cost reduction corresponding to the time reduction of the smart frame would be high.
Finally, we confirmed that large PC buildings have a wide range of erection alternatives, which may create a time-cost conflict. Therefore, the time and cost of the alternatives should be precisely and quickly calculated to support rational decision-making that satisfies the project conditions. In this study, we obtained additional results for developing the mathematical equations required to accurately and quickly analyze the time and cost conflicts.
The results of this study will contribute to providing time and cost reduction for smart frames used in heavily loaded long-span buildings designed with PC components. Academically, our results will help develop the erection simulation algorithms for smart frames. Furthermore, they will be useful for developing a simulation model to precisely and quickly estimate the conflict of time and cost in the erection plans of large PC buildings. | 10,537.8 | 2020-02-07T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
EERSM ) : Energy-Efficient Multi-Hop Routing Technique in Wireless Sensor Networks Based on Combination between Stationary and Mobile Nodes
In Wireless Sensor Network (WSNs), sensor nodes collect data and send them to a Base Station (BS) for further processing. One of the most issues in WSNs that researchers have proposed a hundred of technique to solve its impact is the energy constraint since sensor nodes have small battery, small memory and less data processing with low computational capabilities. However, many researches efforts have focused on how to prolong the battery lifetime of sensor nodes by proposing different routing, MAC, localization, data aggregation, topology construction techniques. In this paper, we will focus on routing techniques which aim to prolonging the network lifetime. Hence, we propose an Energy-Efficient Routing technique in WSNs based on Stationary and Mobile nodes (EERSM). Sensing filed is divided into intersected circles which contain Mobile Nodes (MN). The proposed data aggregation technique via the circular topology will eliminate the redundant data to be sent to the Base Station (BS). MN in each circle will rout packets for their source nodes, and move to the intersected area where another MN is waiting (sleep mode) to receive the transmitted packet, and then the packet will be delivered to the next intersected area until the packet is arrived to the BS. Our proposed EERSM technique is simulated using MATLAB and compared with conventional multi-hop techniques under different network models and scenarios. In the simulation, we will show how the proposed EERSM technique overcomes many routing protocols in terms of the number of hops counted when sending packets from a source node to the destination (i.e. BS), the average residual energy, number of sent packets to the BS, and the number of a live sensor nodes verse the simulation rounds. How to cite this paper: Alassery, F. (2019) (EERSM): Energy-Efficient Multi-Hop Routing Technique in Wireless Sensor Networks Based on Combination between Stationary and Mobile Nodes. Journal of Computer and Communications, 7, 31-52. https://doi.org/10.4236/jcc.2019.74004 Received: March 3, 2019 Accepted: April 16, 2019 Published: April 19, 2019 Copyright © 2019 by author(s) and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY 4.0). http://creativecommons.org/licenses/by/4.0/
Introduction
Wireless Sensor Networks (WSNs) composed a hundred of sensor nodes which are deployed in a harsh environment [1].Sensor nodes have many capabilities such as sensing, gathering, data processing and storing.These capabilities depend on batteries of sensor nodes which need to be extended in order to satisfy WSNs applications [1].Nodes sense or monitor the sensing filed, gather data and process them before sending packets to the BS.All of these mechanisms consume power which contradicts with main feature of WSNs which have limited power resources and lossy networks [1].Usually, wireless sensor nodes communicate with each other using multi-hop transmission over radio communication channels in order to send their packets from a source node to a destination [2].Thus, the sensor nodes contain transceiver module, processing module and power module [2].
Reliability, real-time performance of WSNs, safety/accuracy of transmitted data and the network lifetime are the major challenges in WSNs [2].The reliability in WSNs is the measure of the number of correct packets which are arrived correctly to the destination (i.e. the BS).Authors in [3] analyzed the reliability of WSNs over different topologies such as clustering the sensor nodes into groups.
They also studied the strategies that affect reliable packets transmission such as multiple sink nodes in the sensing filed and single/multipath transmission.
For the real-time performance in WSNs, authors in [4] proposed a framework which enhances the Quality-of-Experience (QoE) for Wireless Multimedia Sensor Networks (WMSNs).The main objective of the proposed framework is to increase the network throughput and avoid dropping packets when transferred to a destination.For the safety and accuracy of data in WSNs, many applications such as in military require data to be accurate in order to avoid unexpected circumstances and damages in a battle filed as an example.Two machine learning approaches for fault detection of transmitted packets are proposed in [5] for the purpose of increasing accuracy of energy management applications.For the network lifetime in WSNs, it is known that WSNs have limited power resources with low processing capabilities, and the battery lifetime of a sensor node should be extended [1]. Figure 1 summarizes some WSNs challenges.
WSNs have been utilized in many applications such as Military missions, Agricultural missions, Industrial WSNs, smart houses, home appliances, healthcare monitoring, disaster monitoring, traffic and transportation monitoring and the environmental sensing [2].
For WSNs in military applications, authors in [6] proposed a secure Journal of Computer and Communications framework for military applications based on WSNs.This framework has different modules such as cryptographic algorithm, operation modes and MAC protocols.The advantages of the proposed algorithm are to increase messages integration and data confidentiality.Authors in [7] proposed WSNs architecture for tactical requirements of a remote and large scale sensing field for military application.They divided the sensing filed into clusters and proposed a Cluster Head (CH) algorithm for a node selection to be the master for transmission.Scalability, real-time communication and affordability of WSNs in military application are investigated in [8].
For WSNs in agricultural missions, authors in [9] proposed a low cost turbidity sensor design for the purpose of monitoring water quality online.The idea is based designing sensor nodes which utilize orthogonal scattered light and transmitted light detection principle.The proposed sensor design shows accurate readings in comparison with the commercial sensor nodes.Authors in [10] employed WSNs as data collection tool and decision system which help farmers for their manual or automated irrigation activities.
For WSNs in industrial applications, authors in [11] proposed a system model which measures the high temperature based on WSNs in an industrial environment.Semi-passive chip is designed and vacuum packaging is described for the purpose of saving electronics against high temperature which cause damages.In [12], MAC protocol for time critical industrial WSNs is proposed in order to ensure delivery of packets before the deadline bounds (i.e.deterministic channel access).The proposed MAC protocol shows better performance than TDMA based WSNs.Author in [13] proposed a routing algorithm in industrial WSNs to ensure Quality of Services (QoS) for high and low reliability packets delivery.
The proposed algorithm achieves a reasonable traffic balancing which prevents network fragmentations.
For smart homes based WSNs and their related appliances, quality of life for elderly people in smart houses can be improved using a proposed system that monitors electricity and different house devices.The system allows smart and remote access to these devices which can be turned on using Android cell phones [14].Authors in [15] measured the amount of energy consumption at a smart home using ZigBee WSNs.They proposed an interface which can shows the value of consumed energy either numerically or graphically for automated houses.A combination between WSNs and wireless smart phone is used to prevent accidents of disabled people at smart houses [16].Sensor nodes are distributed in different rooms and detect potential hazards.After that, sensors send notifications to smart cell phones in order to take actions.
In terms of healthcare monitoring systems based WSNs, authors in [17] proposed a positioning system (i.e.WSN4QoL) for energy efficient monitoring of patients in indoor homes.In that system, patients can be located in indoor environment with high level of accuracy without extra hardware requirements.In Body Area Networks (BANs), using smart phones to collect packets and working as a sink node to send them to the back-end servers is proposed in [18].
For traffic monitoring based on WSNs, authors in [19] uses the graph theory for intelligent traffic monitoring in smart cities. Sensor nodes are distributed around traffic signals, reads and parking.Authors in [20] proposed data centric routing technique for packets delivery for the purpose of traffic monitoring in small scale urban areas.
The objective of this paper is to show how our proposed routing technique achieves high energy efficiency in WSNs in comparison with traditional routing techniques.The proposed routing technique has judicious selection of scenarios and parameters which make WSNs design more flexible and adaptable.In particular, the main contributions of this paper are twofold: 1) Designing a WSN based on multi-hop routing to achieve high level of scalability network.
2) Proposing a new network transmission scenario which includes fixed and mobile nodes for the purpose of fast packets delivery which can be applicable for emergency situations.The Proposing technique has a new data transmission model which ensures that sensor nodes in the same cluster don't send the same packets which leads to further power saving and extending the battery lifetime of sensor nodes.
The rest of the paper is organized as follows.Section 2 discusses some related works.Section 3 discusses the network transmission scenario of the proposed EERSM technique.Our proposed EERSM algorithm is explained deeply in Section 4. The proposed EERSM algorithm examples are discussed in Section 5.The simulation results for different performance parameters (i.e. the average residual energy, the number of sent packets to the BS, and the number of alive sensor nodes) are explained in Section 6.Finally, the conclusion is in Section 7.
Related Works
In this section we discuss some routing protocols which have been proposed in literature for both fixed and mobile sensor nodes.
Routing Protocols in WSNs with Fixed Nodes
For routing in WSNs where sensor nodes are fixed in their locations, many efforts have been focused on how to propose routing protocols that meet WSNs requirements to extend battery lifetime.For example, authors in [21] proposed energy efficient routing protocol based on random projection technique.Nodes are located using polar coordinates and routs for packets delivery are determined before sending packets which are compressed to eliminate the duplication of packets delivery with same contents.Authors in [22], proposed WSNs routing protocol which routs packets between CHs after forming the sensing field into clusters with different sizes.The shortest path and the residual energy are the main factors which are proposed to achieve fast and low energy consumption when sending packets to the BS.Authors in [23] proposed routing algorithm which is based on multiple factor for energy efficient WSNs.These factors include residual energy, end to end delay, the link quality and the shortest distance to the BS.The proposed algorithm shows high packets delivery ratio on a large scale sensing area.Greedy routing in WSNs is proposed in [24].Authors analyzed the ratio of successful packets delivery to the BS when the sensing filed has some obstacles such as a lake.Also, authors studied the affect of packets congestion and collision on real world deliverability of data in their experiment.In order to solve the problem of packets collision and congestion in WSNs, authors in [25] proposed a game theoretical framework which focused on selecting the next hop in the routing path.In [26], a multipath opportunistic routing protocol is proposed for the purpose of solving the problem of multipath transmission where a node may die due to multiple transceivers processing when forming paths to a destination.The reliability of a single node needs to be increased in order to increase the reliability of the routing path, so the networks may ensure the scalability with less power consumption of sensor nodes.Decreasing end to end delay based on a routing protocol with minimum number of nodes when forming a path from a source node to a destination is proposed in [27].When a node generates its packet, it will be forwarded to the next node which may have some packets in a queue which need to be forwarded.The proposed protocol ensures that when packets arrive to a node, it will find another path to the BS when the waiting time in the queue is too long.Combining between the requirements of MAC and routing layers when forming a path between a source node and a destination in WSNs is a protocol which is explained in [28].Authors proposed two MAC algorithms in order to evaluate the channel quality.
For the routing layer, authors proposed a routing algorithm which aims to form multiple paths between two nodes taking into consideration the shortest path and the link quality which is measured by the proposed MAC algorithms.How to construct a cluster and form a routing tree of sensor nodes in WSNs is the technique which is proposed in [29].The proposed technique evaluates different scenarios such as distributing sensor nodes randomly or uniformly with multiple numbers of sensor nodes (i.e.200, 400 and etc.).
Routing Protocols in WSNs with Fixed Nodes
In this subsection we discuss the routing techniques based on mobile topology of nodes which are distributed in a sensing filed.In literature, many researchers have been focused on mobile nodes and their abilities to extend the lifetime of sensor nodes.
Authors in [30] redefined the lifetime of WSNs as the time for the depleted energy of mobile source nodes.Authors assumed that mobile nodes are working as relay nodes which generate packets and transfer them to the next node or to the fixed BS.All nodes send their packets only in their transmission range which reduce the power needed to send, receive and sense packets.In [31], routing in low power and lossy network such as WSNs based on mobile nodes in industrial applications is discussed extensively.The authors proposed a position determination technique for certain mobile nodes which are working beside fixed nodes in order to increase the reliability of transferring packets to the BS.Since the proposed position based technique for the mobile nodes focus on location of each sensor, authors compare their performance results with well-known geographical routing techniques.Forming clusters for mobile nodes in WSNs and proposing cluster based routing technique is discussed in [32].The technique is based on adaptive TDMA (i.e.Time Division Multiple Access).Each mobile sensor node sends their packets during its assigned timeslot.Not only the fixed node inside a cluster has the ability to send its packet but all mobile nodes which enter the cluster can send its packet as long as there is available timeslot in TDMA scheduling mechanism.The proposed technique performance is compared with well-know mobile LEACH protocol (i.e.Low Energy Adaptive Clustering Hierarchy), and the packets loss has been reduced by around 25%. Creation of active area between a source and a destination for routing packets of mobile nodes is discussed in [33].Mobile nodes are tuned between sleep and active modes in order to consume low energy.Many factors have been studied to determine the direction of mobile nodes, their speed and residual energy in order to provide maximum connectivity.
Network Transmission Model
In this section, we explain the network transmission model for our proposed EERSM technique.We design a WSN which consists of many sensor nodes which are distributed randomly in a sensing filed.Sensor nodes consist of stationary which are fixed in their locations and mobile nodes which are located in the intersection areas between clusters.Figure 2 illustrates the network model.
The network transmission mode includes the following assumptions: The sensing field is divided into intersected clusters.
The nodes are distributed randomly in the sensing filed which includes stationary nodes and mobile nodes located in the intersected areas. Every cluster has two or three intersected areas.Every intersected area has one gateway node and pre-assigned mobile node as shown in Figure 2. The transmission like between any two sensor nodes is bidirectional.
Every node has the ability to determine the distance to the next hop forwarding and hence control the transmission power. All sensor nodes have the same initial energy and the BS has ultimate power.
All sensor node consume power for transmitting, receiving and relaying packets [34] (i.e.Total energy consumed for each sensor node ), where T E is energy consumed due to transmit- ting l bits of a packet, which is equal to the total of electrical circuit energy consumption ( cir E ) plus amplifier energy consumption ( amp E ).R E is en- ergy consumed due to receiving l bits of a packet, which is equal to the total of electrical circuit energy consumption ( cir E ) plus data aggregation energy consumption ( AG E ). All sensor nodes stay in sleeping mode when they don't have packets to send, and turn on the transceiver (active mode) only when they have packets to receiver or send.Hence leading to further power saving.
The First order radio model is assumed for packets transmission [35].This radio model is based on the distance (d) between a source node (e.g.node X) and a destination (e.g.node Y).If d between X and Y is less than a threshold level ( thr d ), then the free space propagation model is used).If d between X and Y is greater than a threshold level ( thr d ), then the multipath propagation model is used.
Proposed EERSM Algorithm
In the proposed EERSM algorithm sensor nodes are distributed randomly in sensing area (i.e. a harsh environment).This sensing area (i.e.G = {x, y}; where x and y represent the dimensions of the sensing area) include sensor nodes which perform monitoring or detecting physical phenomena.Theses sensor nodes are divided into a large number of stationary nodes and less number of mobile nodes.For better energy efficiency both kinds of sensor nodes are deigned to be in sleeping phase all the time unless they have packets to send, receiver or relay which shift the transceiver of nodes to the active mode.Mobile nodes move in two directions at angle 0 ϕ = (or straight ahead) and 90 ° at velocity v m (more details are explained in the following subsections).The BS is located in stationary location (i.e.BS = (x BS , y BS )).In addition, gateway nodes (g k ); where k is the gateway node number), for data aggregations are located in ( ) The proposed EERSM has two phases (i.e.initialization and data transmission) which are discussed in the following subsections.
Initialization Phase
At the first stage of the proposed EERSM technique, the sensing field which includes stationary and mobile nodes is divided into equal size clusters (i.e.equal size circles).And the distance between the centres of two neighbouring clusters ( cir l ) is the same for all clusters.Hence, the intersected areas between circles or clusters have similar sizes which can be calculated as explained in [36].If the distance between the centres of two neighbouring circles 0 cir l = , then the area of intersected sector is πr 2 (i.e. the two neighbouring clusters are exactly overlapped); where r is the radii of a cluster.If the distance between the centres of two neighbouring circles cir l is greater than the double of the radius (or 2r) for one of the similar size circles, then the two neighbouring circles don't overlapped and the area of the intersected sector is 0. As shown in Figure 3, the mobile nodes are located in the intersected areas (sectors) between clusters.First we need to figure out the area of sectors (let's denote it Ʌ), and then we need to calculate the area of the tiny part of that sectors (let's denote it ∂).This can be done by subtracting the area of corresponding triangle (let's denote it E ) as follows: In our works it is very important to determine the area of sectors in order to check the current locations of the mobile nodes which are moving as explained at angles 0 ϕ = or 90˚.The area of a sector (Ʌ) is equal to (φ/2)r 2 ; where φ is the angle of the sector [36].In Figure 3, it is shown that the height of the triangle is equal to 2 cir l .Now, let's divide the triangle into two equal sizes triangles as shown in Figure 4. Then φ becomes φ/2.Using trigonometry as follows: ( ) ( ) In order to figure out φ, Equation ( 1) can be rewritten as follows: Now, we need to find out the area of the tiny part of the sectors.The area of E in Figure 4 is equal to ( )( )( ) ; where BC r is the length of the line between angle B and C in Figure 4. Using Pythagorean theorem [36] to calculate BC r , we can find the base of the half of the triangle (let's denote it 2 BC r ) as follows: Now, let's rewrite (5) to find out Base l as follows: ( ) ( ) 4 The area of the tiny part of the sector E is equal to ( )( )( ) Finally, let's call Equation (1) to calculate the intersected area between clusters as follows: In our proposed EERSM, the BS starts sending broadcast messages to all sensor nodes which are distributed randomly in the sensing filed.These broadcast messages reach to all mobile and stationary nodes which respond by sending their locations, unique ID numbers, and the initial energy levels.The BS uses the information to determine the distance between any node and the BS or the distance between any node and the fixed gateway nodes inside intersected areas.
Also, the ID numbers are used to identify the mobile or stationary nodes in order to rout packets through clusters and their related intersected areas (i.e.we call them sectors).
In our network scenario the sensing area is divided into six clusters (i.e. ).This can be expressed as
Data Transmission Phase
In EERSM, once the BS receives all messages from all nodes in the sensing field which specify the initial energy level, the locations of nodes and the related ID numbers, it starts receiving data packets from sensor nodes.All nodes which are located in the same cluster will sense the same physical phenomena.At the first round, the well known Dijkstra algorithm with multi-hop transmission scenario estimates the rout to the closet gateway node which is fixed in its location and hence to the BS.The BS determines the rout and sends it to all related nodes.The related nodes construct their routing tables and update them for every packet transmission.Now, data transmission will start and nodes send their packets to the closest gateway node.In order to save energy, the gateway node will collect (aggregate) similar packets and pick only one of them while other packets will be dropped.In EERSM, the gateway node sends its packet to its related mobile node which is located in its intersected area.If the mobile node is located in the related intersected area, then the gateway node will send the packet to that mobile node to carry the packets to the next gateway node which is closest to the BS.The mobile node moves in two directions which are the straight line ( 0 ϕ = ) or with angle 90 ϕ = in order to reach to the next inter- sected area.The gateway node sends its related packet to the next fixed sensor node in neighbouring cluster if the mobile node is not located in the intersected area after waiting a pre-defined time threshold level (i.e.t thr ).The fixed sensor node relays the packet to the closest gateway node to the BS.When the packet reaches to the closest gateway node to the BS, it will send that packet to the BS and another round will start.All data packets contain the energy levels which are extracted by the BS which notifies the sensor nodes in order to update their routing tables.The data transmission phase will be repeated periodically for all rounds.
The flowchart of EERSM is demonstrated in Figure 5.
Proposed EERSM Algorithm Examples
In this section we discuss various examples for the proposed EERSM algorithm for further clarification.
Example one is illustrated in Figure 6.In this figure, two sensor nodes in cluster 1(i.e.C 1 ) sense the same physical phenomena.They turn on their transceiver in order to send their packets to the closest gateway node g 1 .They always follow multi-hop transmission scenario and Dijkstra algorithm to find the closest gateway node (i.e.g 1 ).The gateway node g 1 collects these packets and selects only one of them for further transmission and drops the other packet.The gateway node g 1 uses the positioning system (i.e.GPS) to determine the location of the mobile node m 1 which is assigned for the corresponding intersected area (i.e. S 1 ).The gateway node found that the mobile node m 1 is located in the intersected area S 1 .So the packet will be transmitted to the mobile node which is responsible for carrying that packet to the closest (next) intersected area (i.e. S 2 ) to the BS.The movement of the mobile node follows the straight line direction for a distance equal to the diameter of one cluster (circle).When the mobile node arrives to the next intersected area (i.e. S 2 ), it will send the packet to the corresponding gateway node g 2 for that sector (i.e. S 2 ).Now, the gateway node g 2 also finds that the mobile node m 2 is located in S 2 , so the packet will be transmitted to m 2 which is responsible for carrying the packet to S 3 since it is the closest intersected area to the BS.Here, the movement of the mobile node follows angle 90 ϕ = .Now, m 2 will send the packet to g 3 which delivers the packet to the BS.Example two is illustrated in Figure 7.One sensor node in cluster 6 (i.e.C 6 ) sends its packet after detecting physical phenomena which needs to be transmitted.Multi-hop transmission scenario with Dijkstra algorithm is used in order to carry the packet to the closest gateway node to the BS (i.e.g 6 ).The packet is arrived to g 6 which tries to send the packet to m 6 .However, g 6 waits a pre-defined time threshold level t thr since the mobile node is not located in S 6 .Now, the g 6 will send the packet using multi-hop transmission with Dijkstra algorithm to the closest gateway node to the BS (i.e.g 7 ).The packet will be relayed through the Figure 6.Example One for EERSM algorithm.sensor node in cluster 5 (C 5 ).g 7 receives the transmitted packet and check the GPS to find out the location of m 7 which is located inside the intersected area S 7 .The packet will be transmitted to m 7 which moves at angle 90 ϕ = in order to carry the packet to g 3 .Since g 3 is the closest gateway node to the BS, it will send the packet to its final destination.
The proposed EERSM algorithm is given below.We explain some notations, the initial phase and the data transmission phase.
Energy Efficient Routing with Stationary and Mobile nodes (EERSM) Algorithm Notations: -N i , m j , g k , C c , S s : stationary nodes, mobile nodes, gateway nodes, clusters and intersected areas, respectively.
-i, j, k, c, s: the number of stationary nodes, mobile nodes, gateway nodes, clusters and intersected areas, respectively.
-G = {x, y}: the sensing area -x, y: the dimensions of the sensing area.cir l : the distance between the centres of two clusters.
thr t : a pre-defined time threshold level.
-r: the radii of a cluster.
-τ : the waiting time to detect mobile node arrival.
Initialization Phase: 1) Deploy all stationary i N , mobile j m and gateway k g nodes in the sensing field G = {x, y}.
2) The BS send broadcast messages to all nodes in G = {x, y}.
3) All i N , j m and k g respond by sending the locations (i.e. for each s S . 5) All gateway nodes k g send their locations to i N in its cluster.
6) Calculate the areas of the sectors ∂ (intersected areas) between clusters to be equal If N i = £; detect activity 1 mode Ni = ; turn on the transceiver for transmission.
Call Dijkstra algorithm ( ) to nearest ( ) , , ; m 2 or m 7 arrive to S 2 or S 7 m 2 or m 7 move to S 3 ; mobile node move to intersected areas S 3 .packet = 1; send packet to g 3 ets thought many sensor nodes in multi-hop forwarding.It is followed by CBR Mobile-WSN technique which is closed to the number of hops utilized in EERSM technique, while routing with blacklisting technique uses many sensor nodes in multi-hop forwarding when sending their packets to the BS. Figure 8 shows this performance metric the number of hops needed for the three techniques over different number of nodes in the simulation.
The second performance metric is the average residual energy.The forwarding phase in routing with blacklisting technique needs every node to search for their nearest node in order to turn on the transceiver and send the packets to a centralized sink node.In addition; mobile nodes move randomly around the sink node in order to carry packets.However, this scenario needs many control messages to be sent between the sink node and the mobile nodes in order to detect the right mobile node to carry the packets.This makes the energy consumption increase.Also, sensor nodes in CBR Mobile-WSN technique need to search for a free cluster head in order to avoid packets loss which increases the energy consumption.In CBR Mobile-WSN technique many control messages need to be sent in order to give a permission for a mobile node to enter a cluster which consume more power, while in the proposed EERSM technique the mobile nodes move from one intersected area to another without the necessity to send control messages.Mobile nodes deliver their packets to the right position (i.e. the right intersected area) and then return to their pre-assigned positions in order to carry more packets.Figure 9 shows the residual energy for the three techniques verses the simulation rounds.The third performance metric is the throughput or the number of packets which are arrived successfully to the BS.Due to the fact that our proposed EERSM has longer lifetime as we will discuss in the following performance metric, nodes will still send to the BS because they are still alive for longer period of time.
Thus, the proposed EERSM has a higher curve in Figure 10, followed by CBR Mobile-WSN technique.
Finally, the fourth performance metric is the network lifetime or the time interval from the first node sending their packets until the last node becomes dead.
In Figure 11
Conclusion
The proposed EERSM routing technique combines the advantages of relaying packets through stationary sensor nodes and pre-assigned locations of mobile nodes.Less number of packets will be relayed using the stationary sensor nodes, and the mobile nodes don't consume too much power to for control messages.
Multi-hop forwarding is used in EERSM in order to accelerate packets delivery and clusters are formed not to nominate cluster head and consume power for nomination procedures, but it is utilized in order to determine the location of mobile nodes in the intersected areas between clusters and hence to use less number of hops to packets transmission.The number of packets arrived successfully to the BS, the residual energy and the network lifetime are performance metrics which are evaluated for the proposed EERSM and compared with similar power efficient routing for mobile nodes in WSNs.From the simulation results, the proposed EERSM can be considered as energy efficient technique in multi-hop transmission scenario for stationary/mobile nodes of WSNs.
)
which contain Journal of Computer and Communications seven mobile nodes (i.e.
location of the mobile node.
energy int E and the node number (i.e.i, j, k, respectively) Journal of Computer and Communications 4) Forming clusters c C with equal intersected areas s S contain one k g stationary nodes in sleeping mode.
Figure 8 .
Figure 8. Number of hops counted when sending packets to the BS.
to BS vs. Number of Sensor Nodes in the Simulation Number of Sensor Nodes Number of Hops to the BS Proposed EERSM Technique CBR Mobile WSN Technique Routing with Blacklisting Technique Journal of Computer and Communications , the stability period of proposed EERSM exceeds routing with blacklisting technique and CBR Mobile-WSN technique by 400 and 200 rounds respectively.The first node become dead in blacklisting technique routing at round 410, while The first node become dead in CBR Mobile-WSN technique at round 588.This is expected results due to the fact that less number of hops involved when sending packets to The BS.Most stationary sensor nodes in the proposed EERSM technique are staying in a sleep mode, and only turn on the transceivers when having packets need to be relayed through them.The majority of transmission depends on the mobile nodes which are moving in their pre-determined directions.
Figure 9 .
Figure 9.The residual energy verses the simulation rounds.
Figure 10 .
Figure 10.The throughput verses the simulation rounds.
Figure 11 .
Figure 11.The lifetime of the networks verses the simulation rounds. | 7,602 | 2019-04-04T00:00:00.000 | [
"Computer Science"
] |
A high-performance computational workflow to accelerate GATK SNP detection across a 25-genome dataset
Background Single-nucleotide polymorphisms (SNPs) are the most widely used form of molecular genetic variation studies. As reference genomes and resequencing data sets expand exponentially, tools must be in place to call SNPs at a similar pace. The genome analysis toolkit (GATK) is one of the most widely used SNP calling software tools publicly available, but unfortunately, high-performance computing versions of this tool have yet to become widely available and affordable. Results Here we report an open-source high-performance computing genome variant calling workflow (HPC-GVCW) for GATK that can run on multiple computing platforms from supercomputers to desktop machines. We benchmarked HPC-GVCW on multiple crop species for performance and accuracy with comparable results with previously published reports (using GATK alone). Finally, we used HPC-GVCW in production mode to call SNPs on a “subpopulation aware” 16-genome rice reference panel with ~ 3000 resequenced rice accessions. The entire process took ~ 16 weeks and resulted in the identification of an average of 27.3 M SNPs/genome and the discovery of ~ 2.3 million novel SNPs that were not present in the flagship reference genome for rice (i.e., IRGSP RefSeq). Conclusions This study developed an open-source pipeline (HPC-GVCW) to run GATK on HPC platforms, which significantly improved the speed at which SNPs can be called. The workflow is widely applicable as demonstrated successfully for four major crop species with genomes ranging in size from 400 Mb to 2.4 Gb. Using HPC-GVCW in production mode to call SNPs on a 25 multi-crop-reference genome data set produced over 1.1 billion SNPs that were publicly released for functional and breeding studies. For rice, many novel SNPs were identified and were found to reside within genes and open chromatin regions that are predicted to have functional consequences. Combined, our results demonstrate the usefulness of combining a high-performance SNP calling architecture solution with a subpopulation-aware reference genome panel for rapid SNP discovery and public deployment. Supplementary Information The online version contains supplementary material available at 10.1186/s12915-024-01820-5.
Background
Single-nucleotide polymorphisms (SNPs) are one of the most common types of genetic variation (e.g., SNPs, insertions, deletions, copy number variations, and inversions) used to study genetic diversity among living organisms [1,2], and are routinely detected by mapping resequencing data to reference genomes using various software tools [3][4][5].In major crops, SNPs are routinely discovered using genome resequencing or array-based hybridization methods on thousands of accessions as documented for rice [6,7], maize [8], soybean [9], and sorghum [10].In order for such data to be used more widely for trait discovery, genomic selection, and functional genomics applications, numerous databases have been developed for crop plants such as, e.g., SNP-Seek [11], ViceVarMap [12], MaizeSNPDB [13], and Rice-Navi [14].Unfortunately, as crop communities continue to improve their flagship genome assemblies, as well as produce multiple new assemblies that take into account population structure [15][16][17], and other factors, it is becoming more onerous for such databases to keep pace with the onslaught of new data coming online.
The Genome Analysis Toolkit (GATK) [18,19], one of the most popular software tools developed for SNP identification, has been widely used for SNP detection for many species [9,20], and was recently modified to identify copy number variants (CNVs) in human [21].Although vast amounts of resequencing data have been processed using GATK [6,9,[20][21][22], the processing speed of the publicly available open-source version(s) can be very time-consuming when very large resequencing data sets are involved.For example, it took our consortia almost 6 months to call SNPs with GATK using ~ 3000 resequenced rice accessions mapped to a single reference genome.Although several commercially and publicly available workflows (e.g., Sentieon [23], Clara Parabricks [24], Falcon [25], DRAGEN-GATK [26]) are now available that accelerate GATK processing times, all require special and expensive hardware (e.g., graphics processing units, GPUs; field-programmable gate arrays, FPGAs) and are normally not suitable for processing large population datasets.
To address the need to detect genetic variation on the almost daily release of high-quality genome assemblies we have identified three challenges that must be solved to meet the demand for speed and efficiency of SNP detection.First, the exponential increase in sequencing and resequencing data requires intelligent data management solutions [23][24][25] and compressed data formats to reduce storage [26,27]; second, data analysis needs flexible workflows and monitoring tools for highthroughput detection and debugging [28]; and third, modern high-performance computing (HPC) architectures are needed to complete jobs efficiently [29,30].
To address these challenges, we designed a flexible workflow and employed high-performance computing (HPC) architectures to develop an open-source genome variant calling workflow for GATK (i.e., HPC-GVCW).The workflow was divided into four phases that include a data parallelization algorithm -"Genome Index splitter" (GIS) [31] -that divides genomes into megabase (MB) size chunks for parallel GATK processing and file merging.By dividing genomes into 45 Mb, 10 Mb, and 5 Mb chunks, we found that the smallest chunk size tested gave the optimal performance.Using HPC-GVCW with a chunk size of 5 Mb enabled us to call SNPs from ~ 3000 resequenced rice accessions (with 17 × genome coverage) on a single rice genome (GS ~ 400 Mb) in 120 h, which is almost ~ 36 times faster than previously reported (~ 6 months).
To demonstrate utility, we ran HPC-GVCW on a 25 crop genomes dataset using publicly available resequencing data sets and the most up-to-date (near) gap-free reference genome releases available and called an average of 27.3 M, 32.6 M, 169.9 M, and 16.2 M SNPs for rice (GS ~ 400 Mb), sorghum (GS ~ 700 Mb), maize (GS ~ 2400 Mb), and soybean (GS ~ 1100 Mb), respectively.
To demonstrate the novelty of the genetic variation discovered, our analysis of SNP datasets from a 16-genome "subpopulation-aware" rice reference panel revealed a total of ~ 2.3 M (8.8%) novel SNPs in total that have yet to be publicly released based.Analysis of these novel SNPs identified 1.3 M SNPs in genes, 20% (i.e., 248,403) of which are predicted to have impacts on gene function.Analysis of open chromatin regions (OCRs) of one accession (i.e., Zhenshan 97) revealed the presence of 7441 novel SNP that may have effects on gene regulation.Finally, in a test case to evaluate the allele status of known agriculturally important genes, we identified 180 accessions that contain the submergence tolerant allele in the Sub1A gene that could be integrated into accelerated breeding programs.
HPC-GVCW development
HPC-GVCW was designed into four phases: (1) mapping, (2) variant calling, (3) call set refinement and consolidation, and (4) variant merging (Fig. 1).Briefly, Phase 1 was designed to map clean resequencing reads to a reference genome.Phase 2 was designed to call variants using GATK for each sample.Phase 3 was designed to merge all variants per sample into a non-redundant joint genotype file by genome-wide intervals (also called "chunks").Phase 4 was designed to generate a genomewide joint genotype by assembling all variant intervals (detailed in Additional file 1: Automated Genome Variant Calling Workflow Design [32][33][34][35]; Additional file 2: Fig. S1; Additional file 3: Table S1).The GVCW workflow was designed to run on high-performance computers (Fig. 1a); however, it can also be employed on alternative computational platforms, including hybrid clusters and high-end workstations (Fig. 1b).Of note, each of the four phases is independent of one another, flexible, and scalable across multiple nodes and platforms (Additional file 1: Workflow flexibility).
With this workflow, the most challenging component to address was the merging of large sample sets (e.g., 3000 rice accessions) into a joint file using GATK with a single node, i.e., Phase 3. To address this challenge, we modified the "genome intervals joint genotype" module supported by GATK ("CombineGVCFs" and "GenotypeGVCFs, " detailed in Additional file 1: Automated Genome Variant Calling Workflow Design) by adding an algorithm called "Genome Index Splitter" (GIS) [31] that can optimize the size and number of genomics intervals utilized.The GIS algorithm creates a "chromosome split table" (CST) to index disjoint variant intervals, which can be fine-tuned based on genome size and available "central processing units" (CPUs) (Additional file 2: Fig. S1c-d).Optimal chunks are calculated based on three steps: (1) locate the largest chromosome length in a given reference genome; (2) calculate the fairness of a divisible integer for a given maximum number of cores; and (3) whole genome reference sequences are divided by the optimal integer number, as illustrated in Additional file 2: Fig. S1e.
For example, the CST with the entries as follows: < chromosome name (ChrName), chunk number (Chunk_no), chromosome starting position (Start), chromosome end position (End) > (ChrName, Chunk_no, Start, End).Once chunk size is optimized, jobs (both GATK's "CombineGVCFs" and "GenotypeGVCFs" functions) can be distributed and parallelized by chunks (Additional file 2: Fig. S1f-g).Leveraging this algorithm ensures that the creation of disjoint variant intervals is optimized based on genome size and computational resources, thereby preventing the underutilization of resources and the reduction of execution times.
HPC-GVCW benchmarking
To evaluate the precision of SNP identification of GVCW, we initially assessed the workflow across three computational platforms -i.e., supercomputer, clusters, and high-end workstations, using a subset of The 3000 Rice Genome Project (3 K-RGP) dataset [6] (n = 30) mapped to The International Rice Genome Sequencing Project (IRGSP) Reference Sequence (RefSeq) [36].We observed a 93.8-94.3%identical call rate across the three platforms and a 83-94% identical call rate when compared with previously published results [37] (Additional file 2: Fig. 2a).
We further compared the efficiency of total CPU hours (i.e., the execution time if all jobs were operated between the standard and genome chunk strategies) between GATK and HPC-GVCW.For GATK, a total of 304 CPU hours was required (9.5 h × 1 node × 32 cores/node) (Fig. 2b), vs. HPC-GVCW which ranged from 63 (chunk size = 5 Mb, nodes = 8) to 2511 (chunk size = 10 Kb, nodes = 2342) hours when using different chunk size/ node combinations (Fig. 2b).This equates to a maximum of 4.8 times more efficient, to 8 times less efficient as Fig. 1 Automated and flexible genome variant calling workflow (GVCW) design for a HPC systems and b diversified system architectures compared to the standard GATK approach, respectively.Of note, we found that the number of CPU hours either increased or decreased at chunk sizes greater or less than 5 Mb when using HPC-GVCW, which is recommended when using the workflow.
Overall, our results reveal that execution time can be reduced by a maximum of 283 times when the smallest genome interval is set to 10 Kb/chunk, and CPU efficiency could be improved 4.8 times using a genome interval set to 5 Mb/chunk for HPC-CVCW, as compared with GATK.
HPC-GVCW benchmarking for multiple crop species
To test if HPC-GVCW could be widely used across multiple crop species, we re-called SNPs using previously published resequencing/reference genome data sets for rice, sorghum, maize, and soybean (Additional file 3: Table S1 and Availability of data and materials for details).Using KAUST's Shaheen 2 supercomputer with 30 K cores, processing 3,024 resequenced samples (3 K-RGP) mapped to a single rice reference genome took 94 h (i.e., 3.91 days) (Additional file 3: Table S2).For the sorghum, maize, and soybean data sets, due to the small number of samples, we only benchmarked HPC-GVCW on a hybrid cluster with 3000 cores and found that even for a 2.4 Gb maize genome [38], SNP calling for 282 samples could be completed within ten days (Additional file 3: Table S2).Our benchmarking test identified 26.5 M, 32.7 M, 167.6 M, and 15.9 M SNPs for rice (IRGSP-1.0),sorghum (BTx623), maize (B73 v4), and soybean (Gmax 275 v2.0), respectively (Table 1 and Additional file 3: Table S3).To assess the accuracy of the SNP calls produced through HPC-GVCW compared with previous reports, we found that 86.3% of the rice (22.8 M) and 89.3% of the sorghum (29.2 M) SNPs were identical (Additional file 2: Fig. S2c-d and Additional file 3: Table S3).For maize, only 25% of the SNP calls overlapped which was likely due to the software and strategy used for SNP calling and filtering [39].For soybean, a direct comparison was not possible due to lack of data availability.
HPC-GVCW at production scale -a 25-genome SNP dataset for multiple crop species
Since the majority of publicly available SNP data for major crop species have yet to be updated on the recent wave of ultra-high-quality reference genomes coming online, we applied HPC-GVCW to call SNPs, with the identical large resequencing datasets, on the most current and publicly available genome releases for rice (i.e., the 16 genome Rice Population Reference Panel) [15,40,41], maize (B73 v4, B73 v5, and Mo17v2) [16,42], sorghum (Tx2783, Tx436, and TX430) [43], and soybean (Wm82 and JD17) [44].
Novel SNPs in rice
Having the ability to map large-scale resequencing datasets rapidly (e.g., 3 K-RGP) to multiple genomes (e.g., the 16-genome RPRP dataset), HPC-GVCW opens the possibility to discover and rigorously interrogate population-level pan-genome datasets on multiple scales -i.e., pan-genome, genome and single gene scale.
Pan-genome scale
Our analysis of the 3 K-RGP dataset [6] S3, and Additional file 3: Table S5).We found that an average of 36.5 Mb of genomic sequence is absent in a single rice genome but is present in at least one of the other 15 RPRP data sets, which is equivalent to ~ 2.1 M SNPs (Fig. 3, Additional file 2: Fig. S3, and Additional file 3: Table S5).For example, when considering the flagship reference genome for rice, i.e., the IRGSP RefSeq [36], a total of ~ 36.6 Mb of genomic sequence is completely absent in the IRGSP RefSeq but is found spread across at least one of the 15 genomes (~ 2.43 Mb/genome), and includes ~ 2.3 M previously unidentified novel SNPs (Fig. 3, Additional file 3: Table S5).
To validate these potentially functional SNPs, we measured the frequency of all 248,403 SNPs across the 3 K-RGP data set as shown in Additional file 2: Fig. S5a.The results show that 76.31% (189,564) of these putative functional SNPs could be identified within three or more rice accessions, thereby confirming the presence and quality of these SNP variants.These results show that much of the collective rice pan-genomes remain to be explored for crop improvement and basic research.
Genome scale -Zhenshan 97 (ZS97)
Open chromatin regions (OCRs) are special regions of the genome that can be accessed by DNA regulatory elements [47,48].Chromatin accessibility (CA) of OCRs can affect gene expression, epigenetic modifications, and patterns of meiotic recombination of tissue cells that could lead to important regulatory effects on biology observation [49,50].For rice, using the IRGSP Ref-Seq and the 3 K-RGP dataset, we previously annotated 5.06 M variants that were located in OCRs, of which ∼2.8% (~ 142,000) were classified as high-impact regulatory variants that may play regulatory roles across multiple tissues [51].
To search for novel SNPs in OCRs that are not present in the IRGSP RefSeq, we scanned for SNPs in OCRs of ZS97, a Xian/Indica variety, as a test case.First, our analysis revealed that approximately 14.6% of the ZS97 genome contains OCRs across the 6 tissues investigated, i.e., flag leaf, flower, lemma, panicle, root, and young leaf (Additional file 2: Fig. S6).We then conducted an intersection analysis of identified OCRs (peak regions) with variant call format (VCF) files and discovered 3,303,820 SNPs located within OCRs of the ZS97 genome (Additional file 3: Table S7), of which 7,441 were novel (i.e., relative to the IRGSP RefSeq).This equates to 6.23% of the 1.19 M ZS97 novel SNPs discussed above (Additional file 3: Table S7).
To validate these SNP, we again measured SNP frequency across 3 K-RGP for all 7441 novel SNPs and found that 78.13% (5814) of these SNPs could be identified in three or more accessions (Additional file 2: Fig. S5b).
To assess the potential functional impact of these novel ZS97 SNPs, we established thresholds by selecting the top and bottom five percent of variation scores from previously scored SNP variation data across as reported [51], which led to the identification of 855 SNPs (Additional file 3: Table S6).Notably, these SNPs accounted for approximately 33.3% of the loci with significant variations, which are considered large-effect SNP loci with a greater impact on chromatin accessibility (CA).These results indicate the effects of physical access to chromatinized DNA, to binding, allowing for active gene transcription for novel SNPs.
Single gene scale -Sub1A
Many of the genes and SNPs identified in our pangenome variant analysis have yet to be tested for their contributions to agronomic performance and biotic and abiotic stresses.For example, prolonged submergence during floods can cause significant constraints to rice production resulting in millions of dollars of lost farmer income [52].One solution to flooding survival has been to cross the Sub1A gene, first discovered in a tolerant indica derivative of the FR13A cultivar (IR40931-26) in 2006 [52], into mega rice varieties such as Swarna, Sambha Mahsuri, and IR64 [52,53].Our analysis of the Sub1A locus across the pan-genome of rice showed that this gene could only be observed in 4 out of 16 genomes in the RPRP data set, including IR64 (Fig. 3c, d).Since Sub1A is absent in the IRGSP RefSeq, the genetic diversity of this locus can only be revealed through the analyses of reference genomes that contain this gene.Thus, we applied the IR64 reference as the base genome for SNP comparisons, and identified a total of 26 SNPs in the Sub1A locus across 3 K-RGP, 6 of which have minor allele frequencies (MAF) greater than 1% (Fig. 1F), including a previously reported SNP (7,546,665-G/A), which is also validated by 4 gene sequences, i.e., OsIR64_09g0004230, OsLima_09g0004190, OsGoSa_09g0004200, and OsARC_09g0004070.This variation resulted in a nonconservative amino acid change from serine (S, Sub1A-1, tolerance-specific allele) to proline (P, Sub1A-2, intolerance-specific allele) [52] (Fig. 3e, f ).The majority of accessions in the 3 K-RGP data set (i.e., 2173) do not contain the Sub1A gene, while 848 do, 668 of which (22.11%) have the Sub1A-2 allele, while 180 accessions (5.96%) contain the Sub1A-1 allele (Fig. 3f ).Understanding the genetic diversity of the Sub-1A gene at the population level helps us understand and filter variants that are predicted to show flooding tolerance across Fig. 3 Rice Population Reference Panel (RPRP) [15] pan-genome variant analysis.a Circos plot depicts the distribution of genomic attributes along the IRGSP RefSeq (window size = 500 Kb).b Comparison of genomic attributes, i.e., genes, SNPs, Pi, and Theta on chromosome 9 across the 16 RPRP pan-genome data sets (window size = 10 Kb).c Rice Gene Index (RGI) comparison of the Sub loci across the 16 RPRP pan-genome data set.d Phylogenetic analysis of Sub1A, Sub1B, and Sub1C across the 16 RPRP pan-genome data set.e Amino acid alignment of the Sub1A gene across the RPRP.f Survey of SNPs within the Sub1A gene across the 3 K-RGP resequencing data set.This analysis revealed the genomic status of the Sub1A gene (presence/absence; submergence tolerance/intolerance) across the 3 K-RGP data set the 3 K-RGP, which could be further applied to precise molecular-assisted selection (MAS) breeding programs.In addition, such pan-genome analyses may also reveal new variants that could provide valuable insights into the molecular mechanisms of flooding tolerance.
Discussion
With the ability to produce ultra-high-quality reference genomes and population-level resequencing data -at will -accelerated and parallel data processing methods must be developed to efficiently call genetic variation at scale.We developed a publicly available open-source high-performance (CPU-based) computing pipeline (HPC-GVCW) that is supported across diversified computational platforms, i.e., desktops, workstations, clusters, and other high-performance computing architectures.In addition, HPC-GVCW was containerized for both Docker [54] and Singularity [55] for reproducible results without reinstallation and software version incompatibilities.
Comparison of SNP calls on identical data sets (i.e., rice 3 K-RGP to the IRGSP RefSeq and 400 samples from Sorghum Association Panel to the BT623v3.1)yielded similar results, however, run times could be reduced from more than six months to less than one week, as in the case for rice 3 K-RGP [6].The GVCW pipeline enabled the rapid identification of a large amount of genetic variation across multiple crops, including sorghum, maize, and soybean on the world's most up-to-date, high-quality reference genomes.These SNPs provide an updated resource of genetic diversity that can be utilized for both crop improvement and basic research, and are freely available through the SNP-Seek [56], Gramene web portals [57], and KAUST Research Repository (KRR [58]).
Key to our ability to rapidly call SNPs on a variety of computational architectures lies in the design of the HPC environment and the distribution of work across multiple nodes.Our next steps will be to apply GVCW on improved computing platforms, e.g., KAUST Shaheen III with unlimited storage and file numbers, 5000 nodes, faster input and output (I/O), and tests on larger forthcoming data sets [59].In addition to GATK, other SNP detection strategies such as the machine learning-based tool "DeepVariant" [3], which shows better performance in execution times with human data [5], have yet to be widely used in plants.With a preliminary analysis of the rice 3 K-RGP dataset, "DeepVariant" identified a larger number of variants at a similar or lower error rate compared to GATK [60].To test how artificial intelligence (AI) can be used to improve food security by accelerating the genetic improvement of major crop species, we plan to integrate "DeepVariant" into our HPC workflow to discover and explore new uncharacterized variation.
In addition, we also plan to apply similar pan-genome strategies on more species beyond rice, sorghum, maize, and soybean to discover and characterize hidden SNPs and diversity, which could provide robust and vital resources to facilitate future genetic studies and breeding programs.
Conclusions
We developed HPC-GVCW for variant calling in major crops, which can reduce execution times > 280 fold, as well as increase efficiency > 4.8 fold as compared with the GATK 'best practice' workflow [19].A new algorithm ("Genome Index splitter") for running 'CombineGVCFs' was designed to parallelize this step and was found to be 19 times faster than available default options.We demonstrated that the entire workflow can be used on a variety of computing platforms, such as hybrid clusters, and high-end workstations using Docker and Singularity images.Using HPC-GVCW, we called population panel variants for the latest high-quality genome references and created 25 immediately applicable datasets with an average of 27.3 M, 32.6 M, 169.9 M, and 16.2 M SNPs for rice (16 population panel references), sorghum (4), maize (3), and soybean (2), respectively.Analysis of a 16-genome rice reference panel revealed ~ 2.3 M novel SNPs relative to the IRGSP RefSeq, which equates to an approximate 8% overall increase in SNP discovery that can be applied immediately to precise molecular-assisted selection (MAS) breeding programs and functional analyses.
SNP annotation
SNPs located in coding regions across the 25 reference genome data sets were identified using their respective annotation files, and functional SNPs were predicted using SnpEff (v5.0e) [45].
Structural variation (SV) update across the 16-genome rice population reference panel (RPRP)
In 2020, we published an index of large structure variations (> 50 bp, SVs) across the 16-genome RPRP that included the MH63RS2 and ZS97RS2 genome assemblies [40].Here, we updated this index using the latest gap-free genome assemblies for these genomes -i.e., MH63RS3 and ZS97RS3 [65] -using the same methods as previously described.To validate this updated SV index, we randomly selected 50 insertions and 50 deletions across the 16 rice genome (RPRP), using the IRGSP RefSeq as the reference and the remaining 15 rice genomes as queries, which included a total of 1,500 entries ((50 + 50) × 15 = 1500).
We then manually validated each SV with alignment information in the Integrative Genomics Viewer (IGV) using raw reads and alignment blocks with Nucmer [66].SVs were considered valid if the two methods could identify the identical insertion or deletion and resulted in 94.6% of the insertions and 99.3% of the deletions being validated as true SVs.
Homologous gene identification across the 16-genome Rice Population Reference Panel (RPRP) based on sequence alignment and syntenic position
As with SVs above, we also updated our rice gene index (RGI) using updated MH63RS3 and ZS97RS3 gene annotations with identical pipeline [41].Briefly, homologous gene sets across the 16-genome RPRP were identified using GeneTribe software [67], by combining protein sequence similarity and collinearity (i.e., synteny) information.Homologous relationships included "reciprocal best hits" (RBHs), "single-side best hits" (SBHs), one-to-many, and singletons.Based on the one-to-one relationships (both RBH and SBH), and considering the collinearity blocks, we removed redundant homologous gene groups to obtain 79,111 non-redundant homologous gene groups.Finally, these non-redundant homologous gene groups were clustered with the "Connected Graph Algorithm" [68] to obtain 41,137 homologous gene groups.
Rice pan-genome SNP analysis
Using the updated SV and RGI data sets in combination with the 16-genome RPRP SNP data set, we conducted a pan-genome SNP analysis to classify genomic regions into core, dispensable, genome-specific, and genome-absent regions [69].Core regions are defined as sequences that are present in all 16 RPRP genomes.Dispensable regions are defined as sequences that are observed in 2 to 15 of the 16 RPRP genomes.Genomespecific regions are defined as sequences that are present in only one of the 16 RPRP genomes, but absent in the remaining 15.Genome-absent regions are defined as sequences that are not present in one of the 16 RPRP genomes, but are present in at least one of the other 15 genomes.For the presence and absence of genes, we classified homologous gene groups as core, dispensable, specific, and absent genes, representing the same logic flow as large SVs.Bedtools (v2.30.0) [70] subcommand "subtract" was used for core region identification, and the subcommand "intersect" was used for SNP extraction.
Chromatin accessibility of novel SNPs in open chromatin regions
Accessible Chromatin, combined with high-throughput sequencing (ATAC-seq) is widely used as one of the mainstream OCR detection methods [51,71,72].In this study, ATAC-seq data from 6 tissues of ZS97RS3, i.e., flag leaf, flower, lemma, panicle, root, and young leaf were obtained from NCBI BioProject PRJNA705005 [73].In the initial steps of analyzing raw ATAC-seq data, we conducted quality control using FastQC [74].This quality control process involved evaluating the quality of sequenced bases, average GC content, and the presence of repetitive sequences.Notably, we observed variations in the content of the first four bases at the 5′ end of each sample.To address this issue, we further refined our data by using fastp (v0.12.4) [75] to remove low-quality data and trim 20 base pairs from the 5' ends.Subsequently, we employed BWA's mem [76] algorithm to align the sequencing data with the ZS97RS3 rice genome while filtering out reads that mapped to mitochondrial and chloroplast DNA.Peak regions of open chromatin regions (OCRs) within the ATAC-seq data were identified using MACS2 [77] with specific parameters: "-shift -100 -extsize 200 -nomodel -B -SPMR -g 3.0e8 -call-summits -p 0.01." Following this, peak call results from each individual sample were combined using BEDtools (v2.26.0) [70] with default settings for merging.
To assess the potential functional impact of novel SNPs, we employed the intragroup Basenji model to study their variation scores [51].Based on the Basenji model training, we predict the effect of variation in different tissues on chromatin accessibility (CA) in neighboring genomic regions.For each variation, we construct two sequences that contain the mutation site and the sequences around it, differing only at the mutation site.We then predict CA in each of these two sequences and score the effect of variants by comparing the CA differences between the two genotypes in the 1 kb region around the mutation site.The higher the score of the SNP, the greater the effect on CA in open chromatin regions.
Fig. 2
Fig.2Benchmarking of the Phase 3 GIS parallelization HPC-GVCW as compare with the standard GATK pipeline using 30 resequenced rice accessions mapped to a single reference genome, a execution time and b CPU hours (execution time × number of nodes) for job completion.Notes: Comparisons were tested between the standard GATK pipeline without chunks using 1 node (blue dots), and HPC-GVCW using a range of computing nodes chunked length combinations, i.e., chunks sizes of 10 Kb, 100 Kb, 200 Kb, 500 Kb, 1 Mb, 5 Mb, 10 Mb, 20 Mb, and chromosome level, which use 2342, 237, 120, 50, 27, 8, 6, 5, and 4 nodes, respectively (yellow dots)
Table 1
Number of SNPs identified across four major crop species using their most recent public genome releases
Table 1 (continued) Species Reference genome Acronyms GenBank ID Number of SNPs SNPs in exons SNPs in 3′ UTR SNPs in 5′ UTR 5′ UTR premature start codon gain variant Missense variant Start lost Stop gained Stop lost
[15]ed to the 16-genome RPRP dataset[15]revealed a core genome of 314.1 Mb, an average dispensable genome of 56.55 Mb, and a private genome of ~ 745 Kb/genome (see Methods for definitions), that contain ~ 22.4 M, 3.2 M and 33.8 K SNPs, respectively (Additional file 2: Fig. | 6,648.6 | 2024-01-25T00:00:00.000 | [
"Computer Science",
"Biology"
] |
Geometric quadratic Chabauty
Determining all rational points on a curve of genus at least 2 can be difficult. Chabauty's method (1941) is to intersect, for a prime number p, in the p-adic Lie group of p-adic points of the jacobian, the closure of the Mordell-Weil group with the p-adic points of the curve. If the Mordell-Weil rank is less than the genus then this method has never failed. Minhyong Kim's non-abelian Chabauty programme aims to remove the condition on the rank. The simplest case, called quadratic Chabauty, was developed by Balakrishnan, Dogra, Mueller, Tuitman and Vonk, and applied in a tour de force to the so-called cursed curve (rank and genus both 3). This article aims to make the quadratic Chabauty method small and geometric again, by describing it in terms of only `simple algebraic geometry' (line bundles over the jacobian and models over the integers).
Introduction
Faltings proved in 1983, see [16], that for every number field K and every curve C over K of genus at least 2, the set of K-rational points C(K) is finite. However, determining C(K), in individual cases, is still an unsolved problem. For simplicity, we restrict ourselves in this article to the case K = Q.
Chabauty's method (1941) for determining C(Q) is to intersect, for a prime number p, in the p-adic Lie group of p-adic points of the jacobian, the closure of the Mordell-Weil group with the p-adic points of the curve. There is a fair amount of evidence (mainly hyperelliptic curves of small genus, see [3]) that Chabauty's method, in combination with other methods such as the Mordell-Weil sieve, does determine all rational points when r < g, with r the Mordell-Weil rank and g the genus of C.
For a general introduction to Chabauty's method and Coleman's effective version of it, we highly recommend [24], and, for an implementation of it that is 'geometric' in the sense of this article, to [17], in which equations for the curve embedded in the Jacobian are pulled back via local parametrisations of the closure of the Mordell-Weil group.
Minhyong Kim's non-abelian Chabauty programme aims to remove the condition that r < g. The 'non-abelian' refers to fundamental groups; the fundamental group of the jacobian of a curve is the abelianised fundamental group of the curve. The most striking result in this direction is the so-called quadratic Chabauty method, applied in [5], a technical tour de force, to the so-called cursed curve (r = g = 3). For more details we recommend the introduction of [5].
This article aims to make the quadratic Chabauty method small and geometric again, by describing it in terms of only 'simple algebraic geometry' (line bundles over the jacobian, models over the integers, and biextension structures). The main result is Theorem 4.12. It gives a criterion for a given list of rational points to be complete, in terms of points with values in Z/p 2 Z only. Section 2 describes the geometric method in less than 3 pages, Sections 3-5 give the necessary theory, Sections 6-7 give descriptions that are suitable for computer calculations, and Section 8 treats an example with r = g = 2 and 14 rational points. As explained in the remarks following Theorem 4.12, we expect that this approach will make it possible to treat many more curves. Section 9.1 gives some remarks on the fundamental groups of the objects we use. They are subgroups of higher dimensional Heisenberg groups, where the commutator pairing is the intersection pairing of the first homology group of the curve. Section 9.2 reproves the finiteness of C(Q), for C with r < g + ρ − 1, with ρ the rank of the Z-module of symmetric endomorphisms of the jacobian of C. It also shows that a version of Theorem 4.12 that uses higher p-adic precision will always give a finite upper bound for C(Q). Section 9.3 gives, through an appropriate choice of coordinates that split the Poincaré biextension, the relation between our geometric approach and the p-adic heights used in the cohomological approach.
Already for the case of classical Chabauty (working with J instead of T , and under the assumption that r < g), where everything is linear, the criterion of Theorem 4.12 can be useful; this has been worked out and implemented in [30]. We recommend this work as a gentle introduction into the geometric approach taken in this article. A generalisation from Q to number fields is given in [13]. For a generalisation of the cohomological approach, see [2] (quadratic Chabauty) and [14] (non-abelian Chabauty).
Although this article is about geometry, it contains no pictures. Fortunately, many pictures can be found in [19], and some in [15].
Algebraic geometry
Let C be a scheme over Z, proper, flat, regular, with C Q of dimension one and geometrically connected. Let n be in Z ≥1 such that the restriction of C to Z[1/n] is smooth. Let g be the genus of C Q . We assume that g ≥ 2 and that we have a rational point b ∈ C(Q); it extends uniquely to a b ∈ C(Z). We let J be the Néron model over Z of the jacobian Pic 0 C Q /Q . We denote by J ∨ the Néron model over Z of the dual J ∨ Q of J Q , and λ : J → J ∨ the isomorphism extending the canonical principal polarisation of J Q . We let P Q be the Poincaré line bundle on J Q × J ∨ Q , trivialised on the union of {0} × J ∨ Q and J Q × {0}. Then the Poincaré torsor is the G m -torsor on J Q × J ∨ Q defined as For every scheme S over J Q × J ∨ Q , P × Q (S) is the set of isomorphisms from O S to (P Q ) S , with a free and transitive action of O S (S) × . Locally on S for the Zariski topology, (P × Q ) S is trivial, and P × Q is represented by a scheme over J Q × J ∨ Q . The theorem of the cube gives P × Q the structure of a biextension of J Q and J ∨ Q by G m , a notion for the details of which we recommend Section I.2.5 of [26], Grothendieck's Exposés VII and VIII [29], and references therein. This means the following. For S a Q-scheme, x 1 and x 2 in J Q (S), and y in J ∨ Q (S), the theorem of the cube gives a canonical isomorphism of O S -modules This induces a morphism of schemes as follows. For any S-scheme T , and z 1 in ((x 1 , y) * P × Q )(T ) and z 2 in ((x 2 , y) * P × Q )(T ), we view z 1 and z 2 as nowhere vanishing sections of the invertible O T -modules (x 1 , y) * P Q and (x 2 , y) * P Q . The tensor product of these two then gives an element of ((x 1 + x 2 , y) * P × Q )(T ). This gives P × Q → J ∨ Q the structure of a commutative group scheme, which is an extension of J Q by G m , over the base J ∨ Q . We denote this group law, and the one on J Q × J ∨ Q , as (2.4) (z 1 , z 2 ) z 1 + 1 z 2 ((x 1 , y), (x 2 , y)) (x 1 , y) + 1 (x 2 , y) (x 1 + x 2 , y) .
In the same way, P × Q → J Q has a group law + 2 that makes it an extension of J ∨ Q by G m over the base J Q . In this way, P × Q is both the universal extension of J Q by G m and the universal extension of J ∨ Q by G m . The final ingredient of the notion of biextension is that the two partial group laws are compatible in the following sense. For any Q-scheme S, for x 1 and x 2 in J Q (S), y 1 and y 2 in J ∨ Q (S), and, for all i and j in {1, 2}, z i,j in ((x i , y j ) * P × Q )(S), we have (2.5) (z 1,1 + 1 z 2,1 ) + 2 (z 1,2 + 1 z 2,2 ) (z 1,1 + 2 z 1,2 ) + 1 (z 2,1 + 2 z 2,2 ) (x 1 + x 2 , y 1 ) + 2 (x 1 + x 2 , y 2 ) (x 1 , y 1 + y 2 ) + 1 (x 2 , y 1 + y 2 ) with the equality in the upper line taking place in ((x 1 + x 2 , y 1 + y 2 ) * P × Q )(S). Now we extend the geometry above over Z. We denote by J 0 the fibrewise connected component of 0 in J, which is an open subgroup scheme of J, and by Φ the quotient J/J 0 , which is anétale (not necessarily separated) group scheme over Z, with finite fibres, supported on Z/nZ. Similarly, we let J ∨0 be the fibrewise connected component of J ∨ . Theorem 7.1, in Exposé VIII of [29] gives that P × Q extends uniquely to a G m -biextension (Grothendieck's pairing on component groups is the obstruction to the existence of such an extension). Note that in this case the existence and the uniqueness follow directly from the requirement of extending the rigidification on J Q × {0}. For details see Section 6.7. Our base point b ∈ C(Z) gives an embedding j b : C Q → J Q , which sends, functorially in Q-schemes S, an element c ∈ C Q (S) to the class of the invertible O C S -module O C S (c − b). Then j b extends uniquely to a morphism (2.7) where C sm is the open subscheme of C consisting of points at which C is smooth over Z. Note that C Q (Q) = C(Z) = C sm (Z). Our next step is to lift j b , at least on certain opens of C sm , to a morphism to a G ρ−1 m -torsor over J, where ρ is the rank of the free Z-module Hom(J Q , J ∨ Q ) + , the Z-module of self-dual morphisms from J Q to J ∨ Q . This torsor will be the product of pullbacks of P × via morphisms with f : J → J ∨ a morphism of group schemes, c ∈ J ∨ (Z), tr c the translation by c, m the least common multiple of the exponents of all Φ(F p ) with p ranging over all primes, and m· the multiplication by m map on J ∨ . For such a map m· • tr c • f , j b : C Q → J Q can be lifted to (id, m· • tr c • f ) * P × Q if and only if j * b (id, m· • tr c • f ) * P × Q is trivial. The degree of this G m -torsor on C Q is minus the trace of λ −1 • m· • (f + f ∨ ) acting on H 1 (J(C), Z). For example, for f = λ the degree is −4mg. Note that j b : C Q → J Q induces (2.9) j (see [25], Propositions 2.7.9 and 2.7.10). This implies that for f such that this degree is zero, there is a unique c such that j * b (id, tr c • f ) * P × Q is trivial on C Q , and hence also its mth power j * b (id, m· • tr c • f ) * P × Q . The map (2.10) Hom(J Q , J ∨ Q ) −→ Pic(J Q ) −→ NS J Q /Q (Q) = Hom(J Q , J ∨ Q ) + sending f to the class of (id, f ) * P Q sends f to f + f ∨ , hence its kernel is Hom(J Q , J ∨ Q ) − , the group of antisymmetric morphisms. But actually, for f antisymmetric, its image in Pic(J Q ) is already zero (see for example [6] and the references therein). Hence the image of Hom(J Q , J ∨ Q ) in Pic(J Q ) is free of rank ρ, and its subgroup of classes with degree zero on C Q is free of rank ρ−1. Let f 1 , . . . , f ρ−1 be elements of Hom(J Q , J ∨ Q ) whose images in Pic(J Q ) form a basis of this subgroup, and let c 1 , . . . , c ρ−1 be the corresponding elements of J ∨ (Z).
By construction, for each i, the morphism j b : C Q → J Q lifts to (id, m· • tr c i • f i ) * P × Q , unique up to Q × . Now we spread this out over Z, to open subschemes U of C sm obtained by removing, for each q dividing n, all but one irreducible components of C sm Fq , with the remaining irreducible component geometrically irreducible. For such a U, the morphism Pic(U) → Pic(C Q ) is an isomorphism, and O C (U) = Z, thus, for each i, there is a lift At this point we can explain the strategy of our approach to the quadratic Chabauty method. Let T be the G ρ−1 m -torsor on J obtained by taking the product of all Then each c ∈ C Q (Q) = C sm (Z) lies in one of the finitely many U(Z)'s. For each U, we have a lift j b : U → T , and, for each prime number p, j b (U(Z)) is contained in the intersection, in T (Z p ), of j b (U(Z p )) and the closure T (Z) of T (Z) in T (Z p ) with the p-adic topology. Of course, one expects this closure to be of dimension at most r := rank(J(Q)), and therefore one expects this method to be successful if r < g + ρ − 1, the dimension of T (Z p ). The next two sections make this strategy precise, giving first the necessary p-adic formal and analytic geometry, and then the description of T (Z) as a finite disjoint union of images of Z r p under maps constructed from the biextension structure.
3 From algebraic geometry to formal geometry Let p be a prime number. Given X a smooth scheme of relative dimension d over Z p and x ∈ X(F p ) let us describe the set X(Z p ) x of elements of X(Z p ) whose image in X(F p ) is x. The smoothness implies that the maximal ideal of O X,x is generated by p together with d other elements t 1 , . . . , t d . In this case we call p, t 1 , . . . , t d parameters at x; if moreover x l ∈ X(Z p ) x is a lift of x such that t 1 (x l ) = . . . t d (x l ) = 0 then we say that the t i 's are parameters at x l . The t i can be evaluated on all the points in X(Z p ) x , inducing a bijection t : We get a bijection This bijection can be interpreted geometrically as follows. Let π : X x → X denote the blow up of X in x. By shrinking X, X is affine and the t i are regular on X, t : With these definitions, we have The affine space ( X p x ) Fp is canonically a torsor under the tangent space of X Fp at x. This construction is functorial. Let Y be a smooth Z p -scheme, f : X → Y a morphism over Z p , and y : If this tangent map is injective, and d x and d y denote the dimensions of X Fp at x and of Y Fp at y, then there are t 1 , . . . , t dy in O Y,y such that p, t 1 , . . . , t dy are parameters at y, and such that . . ,t dx , with kernel generated byt dx+1 , . . . ,t dy .
Integral points, closure and finiteness
Let us now return to our original problem. The notation U, J, T , j b , j b , r, ρ etc., is as at the end of Section 2. We assume moreover that p does not divide n (n as in the start of Section 2) and that p > 2 (for p = 2 everything that follows can probably be adapted by working with residue polydiscs modulo 4).
Let u be in U(F p ), and t := j b (u). We want a description of the closure T (Z) t of T (Z) t in T (Z p ) t . Using the biextension structure of P × , we will produce, for each element of J(Z) j b (u) , an element of T (Z) over it. Not all of these points are in T (Z) t , but we will then produce a subset of T (Z) t whose closure is T (Z) t .
If T (Z) t is empty then T (Z) t is empty, too. So we assume that we have an element t ∈ T (Z) t and we define x t ∈ J(Z) to be the projection of t. Let f = (f 1 , . . . , f ρ−1 ) : J → J ∨,ρ−1 , let c = (c 1 , . . . , c ρ−1 ) ∈ J ∨,ρ−1 (Z). We denote by P ×,ρ−1 the product over J × (J ∨0 ) ρ−1 of the ρ−1 G m -torsors obtained by pullback of P × via the projections to J × J ∨0 ; it is a biextension of J and (J ∨0 ) ρ−1 by G ρ−1 m , and T = (id, m· • tr c • f ) * P ×,ρ−1 . We choose a basis x 1 , . . . , x r of the free Z-module J(Z) 0 , the kernel of J(Z) → J(F p ). For each i, j ∈ {1, . . . , r} we choose P i,j , R i, t , and S t,j in P ×,ρ−1 (Z) whose images in and (x t , f (mx j )): For each such choice there are 2 ρ−1 possibilities. For each n ∈ Z r we use the biextension structure on P ×,ρ−1 → J × (J ∨0 ) ρ−1 to define the following points in P ×,ρ−1 (Z), with specified images in (J × (J ∨0 ) ρ−1 )(Z): where 1 and · 1 denote iterations of the first partial group law + 1 as in (2.4), and analogously for the second group law. We define, for all n ∈ Z r , which is mapped to Hence D t (n) is in T (Z), and its image in J(F p ) is j b (u). We do not know its image in T (F p ). We claim that for n in (p−1)Z r , D t (n) is in T (Z) t . Let n ′ be in Z r and let n = (p−1)n ′ . Then, in the trivial F ×,ρ−1 p -torsor P ×,ρ−1 (j b (u), 0), on which + 2 is the group law, we have: Similarly, in P ×,ρ−1 (0, (m· • tr c • f )(j b (u))) = F ×,ρ−1 p , we have B t (n) = 1, and, similarly, in P ×,ρ−1 (0, 0) = F ×,ρ−1 p , we have C(n) = 1. So, with apologies for the mix of additive and multiplicative notations, in P ×,ρ−1 (F p ) we have mapping to the following element in (J × J ∨0,ρ−1 )(F p ): We have proved our claim that D t (n) ∈ T (Z) t . So we now have the map The following theorem will be proved in Section 5.
4.10
Theorem Let x 1 , . . . , x g be in O J,j b (u) such that together with p they form a system of parameters of O J,j b (u) , and let v 1 , . . . , v ρ−1 be in O T,t such that p, x 1 , . . . , x g , v 1 , . . . , v ρ−1 are parameters of O T,t . As in Section 3 these parameters, divided by p, give a bijection The composition of κ Z with the map (4.10.1) is given by uniquely determined κ 1 , . . . , κ g+ρ−1 in O(A r Zp ) ∧p = Z p z 1 , . . . , z r . The images in F p [z 1 , . . . , z r ] of κ 1 , . . . , κ g are of degree at most 1, and the images of κ g+1 , . . . , κ g+ρ−1 are of degree at most 2. The map κ Z extends uniquely to the continuous map and the image of κ is T (Z) t .
Now the moment has come to confront U(Z p ) u with T (Z) t . We have j b : U → T , whose tangent map (mod p) at u is injective (here we use that C Fp is smooth over F p ). Then, as at the end of Section 3, j b : U p u → T p t is, after reduction mod p, an affine linear embedding of codimension g+ρ−2, j b * : O( T p t ) ∧p → O( U p u ) ∧p is surjective and its kernel is generated by elements f 1 , . . . , f g+ρ−2 (we apologise for using the same letter as for the components of f : J → J ∨,ρ−1 ), whose images in F p ⊗O( T p t ) are of degree at most 1, and such that f 1 , . . . , f g−1 are in O( J p j b (u) ) ∧p . The pullbacks κ * f i are in Z p z 1 , . . . , z r ; let I be the ideal in Z p z 1 , . . . , z r generated by them, and let (4.11) A := Z p z 1 , . . . , z r /I .
Then the elements of Z r p whose image is in U(Z p ) u are zeros of I, hence morphisms of rings from A to Z p , and hence from the reduced quotient A red to Z p .
Theorem
For i ∈ {1, . . . , g+ρ−2}, let κ * f i be the image of κ * f i in F p [z 1 , . . . , z r ], and let I be the ideal of F p [z 1 , . . . , z r ] generated by them. Then κ * f 1 , . . . , κ * f g−1 are of degree at most 1, and κ * f g , . . . , κ * f g+ρ−2 are of degree at most 2. Assume that A := A/pA = F p [z 1 , . . . , z r ]/I is finite. Then A is the product of its localisations A m at its finitely many maximal ideals m. The sum of the dim Fp A m over the m such that A/m = F p is an upper bound for the number of elements of Z r p whose image under κ is in U(Z p ) u , and also an upper bound for the number of elements of U(Z) with image u in U(F p ).
Proof As every f i is of degree at most 1 in x 1 , . . . , x g , v 1 , . . . , v ρ−1 , every κ * f i is an F p -linear combination of κ 1 , . . . , κ g+ρ−1 , hence of degree at most 2. For i < g, f i is a linear combination of x 1 , . . . , x g , and therefore κ * f i is of degree at most 1.
We claim that A is p-adically complete. More generally, let R be a noetherian ring that is J-adically complete for an ideal J, and let I be an ideal in R. The map from R/I to its J-adic completion (R/I) ∧ is injective ([1, Thm.10.17]). As J-adic completion is exact on finitely generated R-modules ([1, Prop.10.12]), it sends the surjection R → R/I to a surjection R = R ∧ → (R/I) ∧ (see [1,Prop.10.5] for the equality R = R ∧ ). It follows that R/I → (R/I) ∧ is surjective. Now we assume that A is finite. As A is p-adically complete, A is the limit of the system of its quotients by powers of p. These quotients are finite: for every m ∈ Z ≥1 , A/p m+1 A is, as abelian group, an extension of A/pA by a quotient of A/p m A. As Z p -module, A is generated by any lift of an F p -basis of A. Hence A is finitely generated as Z p -module.
The set of elements of Z r p whose image under κ is in U(Z p ) is in bijection with the set of Z p -algebra morphisms Hom(A, Z p ). As A is the product of its localisations A m at its maximal ideals, Hom(A, Z p ) is the disjoint union of the Hom(A m , Z p ). For each m, Hom(A m , Z p ) has at most rank Zp (A m ) elements, and is empty if F p → A/m is not an isomorphism. This establishes the upper bound for the number of elements of Z r p whose image under κ is in U(Z p ). By Theorem 4.10, the elements of U(Z) with image u in U(F p ) are in T (Z) t , and therefore of the form κ(x) with x ∈ Z r p such that κ(x) is in U(Z p ) u . This establishes the upper bound for the number of elements of U(Z) with image u in U(F p ).
We include some remarks to explain how Theorem 4.12 can be used, and what we hope that it can do.
Remark
The κ * f i in Theorem 4.12 can be computed from the reduction F r p → T (Z/p 2 Z) of κ Z and (to get the f i ) from j b : U(Z/p 2 Z) u → T (Z/p 2 Z) t . For this, one does not need to treat T and J as schemes, one just computes with Z/p 2 Z-valued points. Now assume that r ≤ g + ρ − 2. If, for some prime p, the criterion in Theorem 4.12 fails (that is, A is not finite), then one can try the next prime. We hope (but also expect) that one quickly finds a prime p such that A is finite for every U and for every u in U(F p ) such that j b (u) is in the image of T (Z) → T (F p ). By the way, note that our notation in Theorem 4.12 does not show the dependence on U and u of j b , κ Z , κ and the f i . Instead of varying p, one could also increase the p-adic precision, and then the result of Section 9.2 proves that one gets an upper bound for the number of elements of U(Z).
Remark
If r < g + ρ − 2 then we think that it is likely (when varying p), for dimension reasons, unless something special happens as in [3] or Remark 8.9 of [4], that, for all u ∈ U(F p ), the upper bound in Theorem 4.12 for the number of elements of U(Z) with image u in U(F p ) is sharp. For a precise conjecture in the context of Chabauty's method, see the "Strong Chabauty" Conjecture in [31].
Remark
Suppose that r = g + ρ − 2. Then we expect, for dimension reasons, that it is likely (when varying p) that, for some u ∈ U(F p ), the upper bound in Theorem 4.12 for the number of elements of U(Z) with image u in U(F p ) is not sharp. Then, as in the classical Chabauty method, one must combine the information gotten from several primes, analogous to 'Mordell-Weil sieving'; see [27]. In our situation, this amounts to the following. Suppose that we are given a subset B of U(Z) that we want to prove to be equal to U(Z). Let B ′ be the complement in U(Z) of B. For every prime p > 2 not dividing n, Theorem 4.12 gives, interpreting A as in the end of the proof of Theorem 4.12, a subset O p of J(Z), with O p a union of cosets for the subgroup p· ker(J(Z) → J(F p )), that contains j b (B ′ ). Then one hopes that, taking a large enough finite set S of primes, the intersection of the O p for p in S is empty.
Parametrisation of integral points, and power series
In this section we give a proof of Theorem 4.10. The main tools here are the formal logarithm and formal exponential of a commutative smooth group scheme over a Q-algebra ( [20], Theorem 1): they give us identities like n·g = exp(n· log g) that allow us to extend the multiplication to elements n of Z p .
The evaluation map from Z p z 1 , . . . , z n to the set of maps Z n p → Z p is injective (induction on n, non-zero elements of Z p z have only finitely many zeros in Z p ).
We say that a map f : Z n p → Z m p is given by integral convergent power series if its coordinate functions are in Z p z 1 , . . . , z n = O(A n Zp ) ∧p . This property is stable under composition: composition of polynomials over Z/p k Z gives polynomials.
Logarithm and exponential
Let p be a prime number, and let G be a commutative group scheme, smooth of relative dimension d over a scheme S smooth over Z p , with unit section e in G(S). For any s in S(F p ), G(Z p ) e(s) is a group fibred over S(Z p ) s . The fibres have a natural Z p -module structure: G(Z p ) e(s) is the limit of the G(Z/p n Z) e(s) (n ≥ 1), S(Z p ) s is the limit of the S(Z/p n Z) s , and for each n ≥ 1, the fibres of G(Z/p n Z) e(s) → S(Z/p n Z) s are commutative groups annihilated by p n−1 . Let T G/S be the relative (geometric) tangent bundle of G over S. Then its pullback T G/S (e) by e is a vector bundle on S of rank d.
Lemma
In this situation, and with n the relative dimension of S over Z p , the formal logarithm and exponential of G base changed to Q ⊗ O S,s converge to maps , that are each other's inverse and, after a choice of parameters for G → S at e(s) as in (3.1), are given by n + d elements of O( G p e(s) ) ∧p and n + d elements of O( T G/S (e) p 0(s) ) ∧p . For a in Z p and g in G(Z p ) e(s) we have a·g = exp(a· log g), and, after a choice of parameters for G → S at e(s), this map Z p × G(Z p ) e(s) → G(Z p ) e(s) is given by n + d elements of Zp × Zp G p e(s) ) ∧p . The induced morphism A 1 Fp × ( G p e(s) ) Fp → ( G p e(s) ) Fp , where ( G p e(s) ) Fp is viewed as the product of T S Fp (s) and T G/S (e(s)), is a morphism over T S Fp (s), bilinear in A 1 Fp and T G/S (e(s)).
Proof Let t 1 , . . . , t n be in O S,s such that p, t 1 , . . . , t n are parameters at s. Then we have a bijection (5.1.2)t : S(Z p ) s → Z n p , a → p −1 ·(t 1 (a), . . . , t n (a)) . Similarly, let x 1 , . . . , x d be generators for the ideal I e(s) of e in O G,e(s) . Then p, the t i and the x j together are parameters for O G,e(s) , and give the bijection The dx i form an O S,s -basis of Ω 1 G/S (e) s , and so give translation invariant differentials ω i on G O S,s . As G is commutative, for all i, dω i = 0 ( [20], Proposition 1.3). We also have the dual O S,s -basis ∂ i of T G/S (e) and the bijection Then log is given by elements log i in ( ]] whose constant term is 0, uniquely determined (Proposition 1.1 in [20]) by the equality Hence the formula from calculus, we have, for all i and J, with |J| denoting the total degree of x J , The claim about convergence and definition of log : G(Z p ) e(s) → (T G/S (e))(Z p ) 0(s) , is now equivalent to having an analytic bijection Z n+d p → Z n+d p given by We have, for each i, For each i, this expression is an element of Z p t 1 , . . . ,t n ,x 1 , . . . ,x d = O( G p e(s) ) ∧p , even when p = 2, because for each J, |J| log i,J is in O S,s , which is contained in Z p t 1 , . . . ,t n , and the function Z ≥1 → Q p , r → p r−1 /r has values in Z p and converges to 0. The existence and analyticity of log is now proved (even for p = 2). As p > 2, the image of (5.1.9) in F p ⊗O( G p e(s) ) ∧p isx i , and on the first n coordinates, log is the identity, so, by applying Hensel modulo powers of p, log is invertible, and the inverse is also given by n + d elements of O( T G/S (e) p 0(s) ) ∧p . The function Z p × G(Z p ) e(s) → G(Z p ) e(s) , (a, g) → exp(a· log g) is a composition of maps given by integral convergent power series, hence it is also of that form.
Parametrisation by power series
The notation and assumptions are as in the beginning of Section 4, in particular, p > 2 and T is as defined in (2.12). We have a t in T (F p ), with image j b (u) in J(F p ), and at in T (Z) lifting t. For every Q in T (Z) mapping to j b (u) in J(F p ) there are unique ε ∈ Z ×,ρ−1 and n ∈ Z r such that Q = ε·Dt(n): the image of Q in J(Z) is in J(Z) j b (u) , hence differs from the image xt in J(Z) oft by an element of J(Z) 0 (with here 0 ∈ J(F p )), i n i x i for a unique n ∈ Z r , hence Dt(n) and Q are in T (Z) and have the same image in J(Z), and that gives the unique ε. So we have a bijection But a problem that we are facing is that the map Z r → T (F p ) j b (u) sending n to the image of Dt(n) depends on the (unknown) images of the P i,j , R i,t and St ,j from (4.1) in P ×,ρ−1 (F p ), and so we do not know for which n and ε the point Then for all n in Z r , because Dt((p − 1)·n) maps to t in T (F p ). Moreover for every Q in T (Z) t there is a unique n ∈ Z r and a unique ε ∈ Z ×,ρ−1 such that Q = ε·Dt(n) = ξ(n)·Dt(n) = D ′ (n). Hence The following lemma proves the existence and uniqueness of the κ i of Theorem 4.10, and the claims on the degrees of the κ i .
Lemma
After any choice of parameters of O T,t as in Theorem 4.10, D ′ is given by For all i in {1, . . . , g + ρ − 1} we let κ ′ i be the reduction mod p of κ ′ j . Then κ ′ 1 , . . . , κ ′ g are of degree at most 1, and the remaining κ ′ j are of degree at most 2.
Proof In order to get a formula for D ′ (n), we introduce variants of the P i,j , R i, t , and S t,j as follows. The images in (J × (J ∨0 ) ρ−1 )(F p ) of these points are of the form (0, * ), (0, * ), and ( * , 0), respectively. Hence the fibers over them of P ×,ρ−1 are rigidified, that is, equal to F ×,ρ−1 p . We define their variants P ′ i,j , R ′ i, t , and S ′ t,j in P ×,ρ−1 (Z p ) to be the unique elements in their orbits under F ×,ρ−1 p whose images in P ×,ρ−1 (F p ) are equal to the element 1 in F ×,ρ−1 p . Replacing, in (4.2) and (4.3), these P i,j , R i, t , and S t,j by P ′ i,j , R ′ i, t , and S ′ t,j gives variants A ′ , B ′ and C ′ , and using these in (4.4) gives a variant D ′ t (n) of 5.2.2. Then, for all n in Z r , D ′ t (n) and D ′ (n) (as in (5.2.2)) are equal, because both are in P ×,ρ−1 (Z p ) t , and in the same F ×,ρ−1 p -orbit. Hence we have, for all n in Z r : This shows how the map n → D ′ (n) is built up from the two partial group laws + 1 and + 2 on P ×,ρ−1 , and the iterations · 1 and · 2 . Lemma 5.1.1 gives that the iterations are given by integral convergent power series. The functoriality in Section 3 gives that the maps induced by + 1 and + 2 on residue polydisks are given by integral convergent power series. Stability under composition then gives that n → D ′ (n) is given by elements (after choosing the necessary parameters) are all of degree at most 1. The same holds for B ′ . We define Then the mod p coordinate functions of C ′ 2 , elements of F p [x 1 , . . . , x r , y 1 , . . . , y r ], are linear in the x i , and in the y j . Hence of degree at most 2, and the same follows for the mod p coordinate functions of C ′ . However, as the first ρg parameters for P ×,ρ−1 come from J × J ∨ρ−1 , and the 1st and 2nd partial group laws there act on different factors, the first ρg mod p coordinate functions of C ′ are in fact linear. As D ′ is obtained by summing, using the partial group laws, the results of A ′ , B ′ and C ′ , we conclude that κ ′ 1 , . . . , κ ′ g are of degree at most 1, and the remaining κ j are of degree at most 2. The same holds then for all κ j .
The p-adic closure
. So together we have: We have extended D ′ to a continuous map Z r p → T (Z p ) t . As Z r p is compact, D ′ (Z r p ) is closed in T (Z p ) t . As Z r and (p − 1)Z r are dense in Z r p , the closures of their images under D ′ are both equal to D ′ (Z r p ), and equal to κ(Z r p ). This finishes the proof of Theorem 4.10.
Explicit description of the Poincaré torsor
The aim of this section is to give explicit descriptions of the Poincaré torsor P × on J × J ∨,0 and its partial group laws, to be used for doing computations when applying Theorem 4.12.
The main results are as follows. Proposition 6.3.2 describes the fibre of P over a point of J × J ∨,0 , say with values in Z/p 2 Z with p not dividing n or in Z[1/n], when the corresponding points of J and J ∨,0 are given by a line bundle on C (over Z/p 2 Z or Z[1/n], and rigidified at b) and an effective relative Cartier divisor on C (over Z/p 2 Z or Z[1/n]). It also translates the partial group laws of P × in terms of such data. Lemma 6.4.8 shows how to deal with linear equivalence of divisors. Lemma 6.5.4 makes the symmetry of P × explicit. Lemma 6.6.8 gives parametrisations of residue polydisks of P × (Z/p 2 Z), and Lemma 6.6.13 gives partial group laws on these residue polydisks. Proposition 6.8.7 describes the unique extension over J × J ∨,0 of the Poincaré torsor on (J × J ∨,0 ) Z[1/n] , in terms of line bundles and divisors on C. Finally, Proposition 6.9.3 describes the fibres of P over Z-points of J × J ∨,0 . In this article, we have chosen to use line bundles and divisors on curves for describing the jacobian and the Poincaré torsor. Another option is to use line bundles on curves and the determinant of coherent cohomology, as in Section 2 of [25]. We note that in Section 2, only the restriction of P to J 0 × J ∨,0 is treated, and moreover, under the assumption that C is nodal (that is, all fibres C Fp are reduced and have only the mildest possible singularities). Another choice we have made is to develop the basic theory of norms of G m -torsors under finite locally free morphisms in this article (Sections 6.1-6.2) and not to refer, for example, to EGA or SGA, because we think this is easier for the reader, and because this way we could adapt the definition directly to our use of it.
Norms
Let S be a scheme, f : S ′ → S be finite and locally free, say of rank n.
. Then the norm morphism is the composition . This is functorial in T : a morphism ϕ : T 1 → T 2 induces an isomorphism Norm S ′ /S (ϕ). It is also functorial for cartesian diagrams ( The norm functor (6.1.2) is multiplicative: such that, if U ⊂ S is open and t 1 and t 2 are in T 1 (U) and T 2 (U), then This construction is functorial for isomorphisms of invertible O S ′ -modules.
Norms along finite relative Cartier divisors
This part is inspired by [21], section 1.1. Let S be a scheme, let f : X → S be an S-scheme of finite presentation. A finite effective relative Cartier divisor on f : X → S is a closed subscheme D of X that is finite and locally free over S, and whose ideal sheaf I D is locally generated by a non-zero divisor (equivalently, I D is locally free of rank 1 as O X -module). For such a D and an invertible O X -module L, the norm of L along D is defined, using (6.1.5), as Then Norm D/S (L) is functorial for cartesian diagrams (X ′ → S ′ , L ′ ) → (X → S, L).
Lemma
Let f : X → S be a morphism of schemes that is of finite presentation. For D a finite effective relative Cartier divisor on f , the norm functor Norm D/S in (6.2.1) is multiplicative in L: Let D 1 and D 2 be finite effective relative Cartier divisors on f . Then the ideal sheaf I D 1 I D 2 ⊂ O X is locally free of rank 1, the closed subscheme D 1 + D 2 defined by it is a finite effective relative Cartier divisor on f . The norm functor in (6.2.1) is additive in D: Proof Let D 1 and D 2 be as stated. If V ⊂ X is open, and f i generates is not a zero-divisor because f 1 and f 2 are not. To show that D 1 + D 2 is finite over S, we replace S by an affine open of it, and then reduce to the noetherian case, using the assumption that f is of finite presentation. Then, (D 1 + D 2 ) red is the image of D 1,red D 2,red → X, and therefore is proper. Hence D 1 + D 2 is proper over S, and quasi-finite over S, hence finite over S. The short exact sequence (6.2.7) We prove (6.2.5), by proving the required statement about sheaves of groups. The diagram commutes, because multiplication by u on O D 1 +D 2 preserves the short exact sequence (6.2.7), multiplying on the sub and quotient by its images in O × D 1 and in O × D 2 ; note that the sub is an invertible O D 1 -module.
Explicit description of the Poincaré torsor of a smooth curve
Let g be in Z ≥1 , let S be a scheme, and π : C → S be a proper smooth curve, with geometrically connected fibres of genus g, with a section b ∈ C(S). Let J → S be its jacobian. On C × S J we have L univ , the universal invertible O-module of degree zero on C, rigidified at b.
Let d ≥ 0, and C (d) the dth symmetric power of C → S (we note that the quotient C d → C (d) is finite, locally free of rank d!, and commutes with base change on S). Then on C × S C (d) we have D, the universal effective relative Cartier divisor on C of degree d. Hence, on C × S J × S C (d) we have their pullbacks D J and L univ C (d) , giving us .
, rigidified at the zero-section of J, gives us a morphism of S-schemes C (d) to Pic J/S . The point db (the divisor d times the base point b) in C (d) (S) is mapped to 0, precisely because L univ is rigidified at b, and 6.2.5. Hence there is a unique morphism : with its rigidifications, is the same as N d . The following proposition tells us what the morphism is, and the next section tells us what the induced isomorphism is between the fibres of N d at points of J × C (d) with the same image in J × S J.
Proposition The pullback of
For c 1 and c 2 in C(S), we have and, as invertible O-modules on C × S C, with ∆ the diagonal and pr ∅ : C × S C → S the structure morphism, we have using Lemma 6.2.2.
For T an S-scheme and x 1 and x 2 in J(T ) given by O-modules L 1 and L 2 on C T , rigidified at b, and D in C (d) (T ), the isomorphism using Lemma 6.2.2.
Proof Let T be an S-scheme, and x be in J(T ). Then x corresponds to the invertible O- meaning that the pullback of (id × z) * P on J T rigidified at 0 by j b equals (id × x) * L univ on C T rigidified at b. Taking T := J and x the tautological point gives the first claim of the proposition.
The symmetry of M with its rigidifications follows from [25], (2.7.1) and Lemma 2.7.5, and (2.7.7), using 2.9. Now we prove (6.3.4). So let T and x be as above, and y = Σ(D) in J(T ) given by a relative divisor D of degree d on C T . As C d → C (d) is finite and locally free of rank d!, we may and do suppose that D is a sum of sections, say Then we have, functorially: Identities (6.3.5) and (6.3.6) follow directly from (6.3.4). Now we prove the claimed compatibility between (6.3.9) and (6.3.10). We do this by considering the case where L is universal, that is, base changing to J T and x the universal point. Then, on J T , we have 2 isomorphisms from Norm ( Hence it suffices to check that this element equals 1 at 0 ∈ J(T ). This amounts to checking that the 2 isomorphisms are equal for L = O C T with the standard rigidification at b. Then, both isomorphisms are the multiplication The compatibility between (6.3.7) and (6.3.8) is proved analogously.
6.3.12 Remark From Proposition 6.3.2 one easily deduces, in that situation, for T an Sscheme, x in J(T ) given by an invertible O-module L on C T , and D 1 and D 2 effective relative Cartier divisors on C T , of the same degree, a canonical isomorphism satisfying the analogous compatibilities as in Proposition 6.3.2. No rigidification of L at b is needed. In fact, for L 0 an invertible O T -module, we have Norm D 1 /T (π * L 0 ) = L ⊗d 0 , where π : C T → T is the structure morphism and d is the degree of D 1 . Hence the right hand side of (6.3.13) is independent of the choice of L, given x.
Explicit isomorphism for norms along equivalent divisors
Let g be in Z ≥1 , let S be a scheme, and p : C → S be a proper smooth curve, with geometrically connected fibres of genus g, with a section b ∈ C(S). Let D 1 , D 2 be effective relative Cartier divisors of degree d on C, that we also view as elements of C (d) (S). Recall from Proposition 6.3.2 the morphism Σ : C (d) → J. Then Σ(D 1 ) = Σ(D 2 ) if and only if D 1 , D 2 are linearly equivalent in the following sense: locally on S, there exists an . In this case, we define div(f ) = D 2 − D 1 . Proposition 6.3.2 gives us, for each invertible O-module L of degree 0 on C rigidified at b (viewed as an element of J(S)) specific isomorphisms
Now we describe explicitly this isomorphism Norm
and then we prove that this isomorphism is the one in (6.4.1).
We construct ϕ L,D 1 ,D 2 locally on S and the functoriality of the construction takes care of making it global. So, suppose that f is as above: . Let n ∈ Z with n > 2g − 2 + 2d. Then p * (L(nb)) → p * L(nb)| D 1 +D 2 and p * (O C (nb)) → p * O C (nb)| D 1 +D 2 are surjective, and (still localising on S) p * (L(nb)) and p * (O C (nb)) are free O S -modules and L(nb)| D 1 +D 2 and O C (nb)| D 1 +D 2 are free O D 1 +D 2 -modules of rank 1. Then we have l 0 in (L(nb))(C) and l 1 in (O C (nb))(C) restricting to generators on D 1 + D 2 . Let D − := div(l 1 ) and D + := div(l 0 ), and let V := C \ (D + + D − ). Note that V contains D 1 + D 2 and that U contains D + + D − . Then, on V , l := l 0 /l 1 is in L(V ), generates L| D 1 +D 2 , and multiplication by l is an isomorphism ·l : and let ϕ L,l,f be the isomorphism, given in terms of generators Now suppose that we made other choices n ′ , l ′ 0 , l ′ 1 . Then we get D − ′ , D + ′ , V ′ , l ′ and ϕ L,l ′ ,f .
Then there is a unique function g ∈ O C (V ∩ V ′ ) × such that l ′ = gl in L(V ∩ V ′ ). Then where, in the last step, we used Weil reciprocity, in a generality for which we do not know a reference. The truth in this generality is clear from the classical case by reduction to the universal case, in which the base scheme is integral: take a suitable level structure on J, then consider the universal curve with this level structure, and the universal 4-tuple of effective divisors with the necessary conditions. We conclude that ϕ L,l, Then there is a unique u ∈ O S (S) × such that f ′ = u·f , and since L has degree 0 on C Hence ϕ L,l,f ′ = ϕ L,l,f . We define
Symmetry of the Norm for divisors on smooth curves
Let C → S be a proper and smooth curve with geometrically connected fibres. For D 1 , D 2 effective relative Cartier divisors on C we define an isomorphism that is functorial for cartesian diagrams (C ′ /S ′ , D ′ 1 , D ′ 2 ) → (C/S, D 1 , D 2 ). If suffices to define this isomorphism in the universal case, that is, over the scheme that parametrises all D 1 and D 2 . Let d 1 and d 2 be in Z ≥0 , and let U := C (d 1 ) × S C (d 2 ) , and let D 1 and D 2 be the universal divisors on C U . Then we have the invertible O U -modules Norm D 1 /U (O C (D 2 )) and Norm D 2 /U (O C (D 1 )). The image of D 1 ∩ D 2 in U is closed, let U 0 be its complement. Then, over U 0 , D 1 and D 2 are disjoint, and the restrictions of Norm D 1 /U (O C (D 2 )) and Norm D 2 /U (O C (D 1 )) are generated by Norm D 1 /U (1) and Norm D 2 /U (1), and there is a unique isomorphism (ϕ D 1 ,D 2 ) U 0 that sends Norm D 1 /U (1) to Norm D 2 /U (1).
We claim that this isomorphism extends to an isomorphism over U. To see it, we base change by U ′ → U, where U ′ = C d 1 × S C d 2 , then U ′ → U is finite, locally free of rank d 1 !·d 2 !. Then D 1 = P 1 +· · ·+P d 1 and D 2 = Q 1 +· · ·+Q d 2 with the P i and Q j in C(U ′ ). The complement of the inverse image U ′0 in U ′ of U 0 is the union of the pullbacks D i,j under pr i,j : U ′ → C × S C of the diagonal, that is, the locus where P i = Q j . Each D i,j is an effective relative Cartier divisor on U ′ , isomorphic as S-scheme to C d 1 +d 2 −1 , hence smooth over S. Now and, on U ′0 , The divisor on U ′ of the tensor-factor 1 at (i, j), both in Norm D 1 /U ′ (1) and in Norm D 2 /U ′ (1), is D i,j . Therefore, the isomorphism (ϕ D 1 ,D 2 ) U 0 extends, uniquely, to an isomorphism ϕ D 1 ,D 2 over U ′ , which descends uniquely to U. Our description of ϕ D 1 ,D 2 allows us to compute it in the trivial case where D 1 and D 2 are disjoint. One should be a bit careful in other cases. For example, when d 1 = d 2 = 1 and P = Q, we have P * O C (Q) = P * O C (P ) is the tangent space of C → S at P , and hence also at Q, but ϕ P,Q is multiplication by −1 on that tangent space. The reason for that is that the switch automorphism on C × S C induces −1 on the normal bundle of the diagonal.
. Moreover the isomorphisms ϕ D 1 ,D 2 , and consequently ψ D 1 ,D 2 , are compatible with addition of divisors, that is, under (6.3.10) and (6.3.8), for every triple D 1 , D 2 , D 3 of relative Cartier divisors on C we have Proof It is enough to prove it in the universal case, that is, when D 1 and D 2 are the universal divisors on C U , and there we know that there exists a u in O U (U) × = O S (S) × such that Since the symmetry in Proposition 6.3.2 is compatible with the rigidification at (0, 0) ∈ (J×J)(S) then ψ d 1 b,d 2 b is the identity on O U , as well as the right hand side of (6.5.5) when D i = d i b. Hence u = u(d 1 b, d 2 b) = 1, proving (6.5.5). Now we prove (6.5.6). As for (6.5.5), it is enough to prove it in the universal case and then we can reduce to the case where
Explicit residue disks and partial group laws
Let C be a smooth, proper, geometrically connected curve over Z/p 2 , with a b ∈ C(Z/p 2 ), let g be the genus, and let M be as in Proposition 6.3.
is given by (D, E) we parametrise M × (Z/p 2 ) α , under the assumption that there exists a nonspecial split reduced divisor of degree g on C Fp . Let b 1 , . . . , b g in C(Z/p 2 ) have distinct images b i in C(F p ) such that h 0 (C Fp , b 1 +· · ·+b g ) = 1, and let b g+1 , . . . , b 2g in C(Z/p 2 ) be such that the b g+i are distinct and h 0 (C Fp , b g+1 +· · ·+b 2g ) = 1. Then the maps (6.6.1) areétale respectively in the points (b 1 , . . . , b g ) ∈ C g (F p ) and (b g+1 , . . . , b 2g ) ∈ C g (F p ) and consequently give bijections such that p and x c generate the maximal ideal of O C,c . For each i = 1, . . . , 2g we choose x b i so that x b i (b i ) = 0. For each (Z/p 2 )-point c ∈ C(Z/p 2 ) with image c in C(F p ) and for each λ ∈ F p let c λ be the unique point in C(Z/p 2 ) c with x c (c λ ) = λp. Then the map λ → c λ is a bijection F p → C(Z/p 2 ) c hence the maps f 1 , f 2 induce bijections Hence M × (Z/p 2 ) D,E is the union of M × (D λ , E µ ) as λ and µ vary in F g p and by Proposition 6.3.2 and Remark 6.3.12 we have (6.6.4) For each i ∈ {1, . . . , g}, c ∈ C(Z/p 2 ) and λ ∈ F p we define . Then, for each c ∈ C(Z/p 2 ) and each λ ∈ F g p , (6.6.5) We write E ± = E 0,± + · · · + E g,± so that E 0,± is disjoint from {b 1 , . . . , b g }, and E i,± , restricted to C Fp , is supported on Then, for each λ in F g p , (6.6.6) ). By (6.6.4), (6.6.5) and (6.6.6) we see that, for λ and µ in F g p , (6.6.7) generates the free rank one Z/p 2 -module M(D λ , E µ ). The fibre M × (D, E) over (D, E) in (J × J)(F p ) is an F × p -torsor, containing s D,E (0, 0), hence in bijection with F × p by sending ξ in F × p to ξ·s D,E (0, 0). Using that (Z/p 2 ) × = F × p × (1 + pF p ), we conclude the following lemma.
6.6.8 Lemma With the assumptions and definitions from the start of Section 6.6, we have, for each ξ ∈ F × p , a parametrisation of the mod p 2 residue polydisk of M × at ξ·s D,E (0, 0) by the bijection Using this parametrization it easy to describe the two partial group laws on M × (Z/p 2 ) when one of the two points we are summing lies over (D, E) and the other lies over (D, 0) or (0, E).
To compute the group law in J(Z/p 2 ) we notice that for each c ∈ C(Z/p 2 ) such that x c (c) = 0 and for each λ, µ ∈ F p we have (6.6.9) x c x c − (λ+µ)p and since these rational functions generate O C (c λ −c+c µ −c) and O C (c λ+µ −c) in a neighborhood of c, we have the equality of relative Cartier divisors on C (6.6.10) Hence, under the definition for λ ∈ F g p of (6.6.11) D 0 we have, for all λ, µ ∈ F g p , that D λ + D 0 µ = D λ+µ and E λ + E 0 µ = E λ+µ . Definition 6.6.7, applied with (D, 0) and (0, E), with x 0,E = 1 and, for every c ∈ C(F p ), with x 0,c = 1, gives, for all λ, µ in F g p , the elements (6.6.12) With these definitions, we have the following lemma for the partial group laws of M.
We end this section with one more lemma.
Lemma
The parametrization in Lemma 6.6.8 is the inverse of a bijection given by parameters on M × analogously to (3.1).
Proof Let Q be the pullback of M by f 1 ×f 2 with f 1 and f 2 as in (6.6.1). Then the lift f 1 ×f 2 : Q × → M × isétale at any point β ∈ Q(F p ) lying over b = (b 1 , . . . , b 2g ) ∈ (C 2g )(F p ) and induces a bijection between Q × (Z/p 2 ) b and M × (Z/p 2 ) (D,E) . In particular we can interpret s D,E (λ, µ) as a section of Q(b 1,λ 1 , . . . b 2g,µg ) and we can interpret the parametrization in Lemma 6.6.8 as a parametrization of Q × (Z/p 2 ) ξs D,E (0,0) . It is then enough to prove that the parametrization in Lemma 6.6.8 is the inverse of a bijection given by parameters on Q × . It comes from the definition of c ν for c ∈ C(Z/p 2 ) and ν ∈ F p , that the maps λ i µ i : C 2g (Z/p 2 ) b → F p are given by parameters in O C 2g ,b divided by p. In order to see that also the coordinate τ : Q × (Z/p 2 ) ξs D,E (0,0) → F p is given by a parameter divided by p it is enough to prove that there is an open subset U ⊂ C 2g containing b and a section s trivializing Q| U such that s D,E (λ, µ) = s(b 1,λ 1 , . . . , b 2g,µg ). Remark 6.3.12 and (6.5.1) give that (6.6.16) where ∆ ⊂ C×C is the diagonal and π i is the i-th projection C g × C g → C. We can prove that there is an open subset U ⊂ C g ×C g containing b and a section s trivializing Q| U such that s D,E (λ, µ) = s(b 1,λ 1 , . . . , b 2g,µg ), by trivializing each factor of the above tensor product in a neighborhood of b. Let us see it, for example, for the pieces of the form (π i , π g+j ) * O C×C (∆). Let π 1 , π 2 be the two projections C × C → C and let us consider the divisor ∆: for each pair of points c 1 , c 2 ∈ C(F p ) the invertible O-module O C×C (−∆) is generated by the section x ∆,c 1 ,c 2 := 1 in a neighborhood of (c 1 , c 2 ) if c 1 = c 2 , while it is generated by the section x ∆,c 1 , which is a factor in (6.6.7). This gives a section s i,j trivializing (π i , π g+j ) * O C×C (∆) in a neighborhood of b. With similar choices we can find sections trivializing the other factors in (6.6.16) in a neighborhood of b and tensoring all such sections we get a section s such that s D,E (λ, µ) = s(b 1,λ 1 , . . . , b 2g,µg ).
Extension of the Poincaré biextension over Néron models
Let C over Z be a curve as in Section 2. Let q be a prime number that divides n. We also write C for C Zq . Let J be the Néron model over Z q of Pic 0 C/Qq , and J 0 its fibre-wise connected component of 0. On (J × Zq J) Qq we have M as in Proposition 6.3.2, rigidified at 0 × J Qq and at J Qq × 0. Let us now prove that the G m -torsor M × on J × Zq J 0 has a unique biextension structure, extending that of M × . Over J × Zq J × Zq J 0 we have the invertible O-modules whose fibres, at a point (x, y, z) (with values in some Z q -scheme) are M(x + y, z) and M(x, z) ⊗ M(y, z). The biextension structure of M × gives an isomorphism between the restrictions of these over Q q , that differs from an isomorphism over Z q by a divisor with support over F q . The compatibility with the rigidification of M over J × Zq 0 proves that this divisor is zero. The other partial group law, and the required properties of them follow in the same way. We have now shown that M × extends the biextension M × .
Explicit description of the extended Poincaré bundle
Let C over Z be a curve as in Section 2. Let q be a prime number that divides n. We also write C for C Zq . By [22], Corollary 9.1.24, C is cohomologically flat over Z q , which means that for all Z q -algebras A, O(C A ) = A. Another reference for this is [28], (6.1.4), (6.1.6) and (7.2.1).
The relative Picard functor Pic C/Zq sends a Z q -scheme T to the set of isomorphism classes of (L, rig) with L an invertible O-module on C T and rig a rigidification at b. By cohomological flatness, such objects are rigid. But if the action of Gal(F q /F q ) on the set of irreducible components of C Fq is non-trivial, then Pic C/Zq is not representable by a Z q -scheme, only by an algebraic space over Z q (see [28], Proposition 5.5). Therefore, to not be annoyed by such inconveniences, we pass to S := Spec(Z unr q ), the maximal unramified extension of Z q . Then Pic C/S is represented by a smooth S-scheme, and on C × S Pic C/S there is a universal pair (L univ , rig) ( [28], Proposition 5.5, and Section 8.0). We note that Pic C/S → S is separated if and only if C Fq is irreducible.
Let Pic
[0] C/S be the open part of Pic C/S where L univ is of total degree zero on the fibres of C → S. It contains the open part Pic 0 C/S where L univ has degree zero on all irreducible components of C F q .
Let E be the closure of the 0-section of Pic C/S , as in [28]. It is contained in Pic C/S . By [28], Proposition 5.2, E is represented by an S-group scheme,étale.
By [28], Theorem 8.1.4, or [9], Theorem 9.5.4, the tautological morphism Pic [0] C/S → J is surjective (for theétale topology) and its kernel is E, and so J = Pic C/S → J induces an isomorphism Pic 0 C/S → J 0 . Let C i , i ∈ I, be the irreducible components of C F q . Then, as divisors on C, we have For L an invertible O-module on C F q , its multidegree is defined as and its total degree is then The multidegree induces a surjective morphism of groups (6.8.4) mdeg : Pic C/S (S) → Z I . Now let d ∈ Z I be a sufficiently large multidegree so that every invertible O-module L on C Fq with mdeg(L) = d satisfies H 1 (C F q , L) = 0 and has a global section whose divisor is finite. Let L 0 be an invertible O-module on C, rigidified at b, with mdeg(L 0 ) = d. Then over C × S J 0 we have the invertible O-module L univ ⊗ L 0 , and its pushforward E to J 0 . Then E is a locally free O-module on J 0 . Let E be the geometric vector bundle over J 0 corresponding to E. Then over E, E has its universal section. Let U ⊂ E be the open subscheme where the divisor of this universal section is finite over J 0 . The J 0 -group scheme G m acts freely on U. We define V := U/G m . As the G m -action preserves the invertible O-module and its rigidification, the morphism U → J 0 factors through U → V and gives a morphism Σ L 0 : V → J 0 . Then on C × S V we have the universal effective relative Cartier divisor D univ on C × S V → V of multidegree d, and L univ ⊗ L 0 together with its rigidification at b is (uniquely) isomorphic to Then Σ L 0 sends, for T an S-scheme, a T -point D on with its rigidification at b. Let s 0 be in L 0 (C) such that its divisor D 0 is finite over S, and let v 0 ∈ V (S) be the corresponding point.
On Pic
[0] C/S × S V × S C we have the universal L univ from Pic [0] C/S with rigidification at b, and the universal divisor D univ . Then on Pic C/S × S v 0 : (6.8.6) N q,d : Pic
Any global regular function on the integral scheme Pic
[0] C/S × S V is constant on the generic fibre, hence in Q unr q , and restricting it to (0, v 0 ) shows that it is in Z unr q , and if it is 1 on Pic C/S × S v 0 , it is equal to 1. Therefore trivialisations on Pic C/S × S V . The next proposition generalises [25], Corollary 2.8.6 and Lemma 2.7.11.2: there, C → S is nodal (but not necessarily regular), and the restriction of M to J 0 × S J 0 is described.
Proposition
In the situation of Section 6.8, the pullback of the invertible O-module M on J × Z unr q J 0 to Pic In a diagram: For T any Z unr q -scheme, for x in J(T ) given by an invertible O-module L on C T rigidified at b, and y in J 0 (T ) = Pic 0 C/Z unr q (T ) given by the difference D = D + − D − of effective relative Cartier divisors on C T of the same multidegree, we have Proof The scheme Pic q , hence regular, it is connected, hence integral, and since V Fq is irreducible, the irreducible components of (Pic which, by the way, equals the kernel of Z I → Z, x → j∈I m j x j .
We prove the first claim. Both N q,d and the pullback of M are rigidified on Pic [0] C/Z unr q × v 0 . Below we will give, after inverting q, an isomorphism α from N q,d to the pullback of M that is compatible with the rigidifications. Then there is a unique divisor D α on Pic
and let x be in Pic
[0] C/Z unr q (Z unr q ) specialising to an F q -point of P i , then restricting α to (x i , v 0 ) and using the compatibility of α (over Q unr q ) with the rigidifications, gives that the multiplicity of P i × V Fq in D α is zero. Hence D α is zero.
Let us now give, over (Pic C/Z unr q ) Q unr q = J Q unr q , and that V Q unr q = C (|d|) Q unr q , where |d| = i m i d i is the total degree given by the multidgree d. For T a Q unr q -scheme, x ∈ J(T ) given by L an invertible O C T -module rigidified at b, and v ∈ V (T ) given by a relative Cartier divisor D of degree |d| on C T , we have, using Proposition 6.3.2 and (6.8.6), the following isomorphisms (functorial in T ), respecting the rigidifications at v = v 0 : This finishes the proof of the first claim of the Proposition. The second claim follows directly from the definition of N q,d , plus the compatibility at the end of Proposition 6.3.2.
Integral points of the extended Poincaré torsor
Let C over Z be a curve as in Section 2. Given a point (x, y) ∈ (J × J 0 )(Z) we want to describe explicitly the free Z-module M(x, y) when x is given by an invertible O-module L of total degree 0 on C rigidified at b and y is given as a relative Cartier divisor D on C of total degree 0 with the property that there exists a unique divisor V whose support is disjoint from b and contained in the bad fibres of C → Spec(Z) such that O(D+V ) has degree zero when restricted We write V = q|n V q where V q is a divisor supported on C Fq . For every prime q dividing n let C i,q , i ∈ I q the irreducible components of C Fq with multiplicity m i,q and let V i,q be the integers so that V q = i∈Iq V i,q C i,q .
Proposition
The integers in (6.9.2) are given by Proof For every q dividing n let H q be an effective relative Cartier divisor on C Zq whose complement U q is affine (recall that C is projective over Z, take a high degree embedding and a hyperplane section that avoids chosen closed points c i,q on the C i,q ). The Chinese remainder theorem, applied to the O C (U q )-module (O C (D + V ))(U q ) and the (distinct) closed points c i,q , provides an element f q of (O C (D + V ))(U q ) that generates O C (D + V ) at all c i,q . Let D q = D + q − D − q be the divisor of f q as rational section of O C (D + V ). Then D + q and D − q are finite over Z q , and f q is a rational function on C Zq with This linear equivalence, restricted to Q q , gives, via the definition in (6.4.7), the isomorphism (6.9.5) ϕ : Tensoring with Norm (D − +D − q )/Qq (L) −1 we obtain the isomorphism (6.9.6) ϕ ⊗ id : using the identifications (6.9.7) Using the same method as for getting the rational section f q of O C (D + V ), we get a rational section l of L with the support of div(l) finite over Z q and disjoint from the supports of D and D q , and from the intersections of different C i,q and C j,q . By Proposition 6.8.7, and the choice of l, and (6.9.9) By (6.4.4), we have that ϕ ⊗ id maps Comparing with (6.9.2), we conclude that (6.9.11) e q = v q (f q (div(l))) .
We write div(l) = j n j D j as a sum of prime divisors. These D j are finite over Z q , disjoint from the support of the horizontal part of div(f q ), that is of D q − D, and each of them meets only one of the C i,q , say C s(j),q . Then, for each j, f m s(j),q q and q −V s(j),q have the same multiplicity along C s(j),q , and consequently they differ multiplicatively by a unit on a neighborhood of D j . Then we have We get (6.9.13)
Description of the map from the curve to the torsor
The situation is as in Section 2. The aim of this section is to give descriptions of all morphisms in the diagram (2.12), in terms of invertible O-modules on (C × C) Q and extensions of them over C × U, to be used for doing computations when applying Theorem 4.12. The main point is that each tr c i • f i is described in (7.4) as a morphism (of schemes) α L i : J Q → J Q with L i an invertible O-module on C × U, and that Proposition 7.8 describes ( j b ) i : C Z[1/n] → T i . For finding the line required line bundles, see [12]. We describe the morphism j b : U → T in terms of invertible O-modules on C ×C sm . Since T is the product, over J, of the G m -torsors T i := (id, m· • tr c i • f i ) * P × this amounts to describing, for each i, the morphism ( j b ) i : U → T i . Note that tr c i • f i : J Q → J Q is a morphism of groupschemes composed with a translation, and that all morphisms of schemes α : J Q → J Q are of this form. From now on we fix one such i and omit it from our notation.
Let α : J Q → J Q be a morphism of schemes, let L α be the pullback of M (see (6.3.3)) to , and let T α := (id, α) * M × on J Q : Then (b, id) * L α = O C Q , L α is of degree zero on the fibres of pr 2 : (C × C) Q → C Q , and: j * b T α is trivial if and only if diag * L α is trivial. Note that diagram (7.1) without the G m -torsors is commutative.
Conversely, let L be an invertible O-module on (C × C) Q , rigidified on {b} × C Q , and of degree 0 on the fibres of pr 2 : (C × C) Q → C Q . The universal property of L univ gives a unique β L : C Q → J Q such that (id × β L ) * L univ = L (compatible with rigidification at b). The Albanese property of j b : C Q → J Q then gives that β L extends to a unique α L : J Q → J Q such that α L • j b = β L . Then j * b T α L is trivial if and only if diag * L is trivial. We have proved the following proposition.
Proposition
In the situation of Section 2, the above maps α → L α and L → α L are inverse maps between the sets correspond to 1. Then m· • α L extends over Z to m· • α L : J → J 0 , and the restriction of j * b (m· • α L ) * M on C sm to U is trivial, giving a lift j b , unique up to sign: The invertible O-module L on (C × C) Q with its rigidification of (b, id) * L, extends uniquely to an invertible O-module on (C × C) Z[1/n] , still denoted L.
Proposition
Let S be a Z[1/n]-scheme, let d and e be in Z ≥0 , and let D ∈ C (d) (S) and E ∈ C (e) (S). Then we have: .
Proof We may and do assume (finite locally free base change on S) that we have x i and y j in C(S), such that D = i x i and E = j y j . Recall that, for c ∈ C(S), β L (c) in J(S) is (id, c) * L on C S , with its rigidification at b. Then we have: from which the desired equality follows. Now we prove the second claim. Let x be in C(S). The first equality holds by definition. Taking D = E = x in what we just proved, gives the second equality, and the third comes from the rigidification at b. Now let L be any extension of L with its rigidification of (b, id) * L from (C × C) Z[1/n] to C × U. For q dividing n, let W q be the valuation along U Fq of the rational section ℓ of diag * L on U. Then ℓ, multiplied by the product, over the primes q dividing n, of q −Wq , generates diag * L on U: There is a unique divisor V on C × U with support disjoint from (b, id)U and contained in the (C × U) Fq with q dividing n, such that has multidegree 0 on the fibres of pr 2 : C × U → U. Then L m is the pullback of L univ via id , because on C sm × J 0 the restriction of L univ and (j b × id) * M are equal (both are rigidified after (b, id) * and equal over Z[1/n]; here we use that, for all q|n, J 0 Fq is geometrically connected). Hence, on U we have j * b T m·•α L = diag * (L ⊗m (V ) × ), compatible with rigidifications at b ∈ U(Z[1/n]). Our trivialisation j b on U of T m·•α L is therefore a generating section of L ⊗m , multiplied by the product over the q dividing n, of the factors q −Vq , where V q is the multiplicity in V of the prime divisor (U × U) Fq . This means that we have proved the following proposition.
7.8
Proposition For x and S as in Proposition 7.5, we have the following description of j b : 8 An example with genus 2, rank 2, and 14 points The example that we are going to treat is the quotient of the modular curve X 0 (129) by the action of the group of order 4 generated by the Atkin-Lehner involutions w 3 and w 43 . An equation for this quotient is given in the table in [18], and Magma has shown that that equation and the equations below give isomorphic curves over Q.
Let C 0 be the curve over Z obtained from the following closed subschemes of A 2 by glueing the open subset of V 1 where x is invertible with the open subset of V 2 where z is invertible using the identifications z = 1/x, w = y/x 3 . The scheme C 0 can be also described as a subscheme of the line bundle L 3 associated to the invertible O-module O P 1 Z (3) on P 1 Z with homogeneous coordinates X, Z: the map O P 1 Z (3) → O P 1 Z (6) sending a section Y to Y ⊗Y +Z 3 ⊗Y induces a map ϕ from L 3 to the line bundle L 6 associated to O(6); then C 0 is isomorphic to the inverse image by ϕ of the section s := X 6 −3X 5 Z+X 4 Z 2 +3X 3 Z 3 −X 2 Z 4 −XZ 5 of L 6 and since the map ϕ is finite of degree 2 then C 0 is finite of degree 2 over P 1 Z . Hence C 0 is proper over Z and it is moreover smooth over Z[1/n] with n = 3 · 43. The generic fiber of C 0 is a curve of genus g = 2, labeled 5547.b.16641.1 on www.lmfdb.org. The only point where C 0 is not regular is the point P 0 = (3, x − 2, y − 1) contained in V 1 and the blow up C of C 0 in P 0 is regular.
In the rest of this section we apply our geometric method to the curve C and we prove that C(Z) contains exactly 14 elements. We use the same notation as in Sections 2 and 4.
The fiber C F 43 is absolutely irreducible while C F 3 is the union of two geometrically irreducible curves, a curve of genus 0 that lies above the point P 0 and that we call K 0 , and a curve of genus 1 that we call K 1 . We define U 0 := C \ K 1 and U 1 := C \ K 0 so that C(Z) = C sm (Z) = U 0 (Z) ∪ U 1 (Z) and both U 0 and U 1 satisfy the hypothesis on U in Section 2.
We have K 0 · K 1 = 2 and consequently the self-intersections of K 0 and K 1 are both equal to −2. We deduce that all the fibers of J over Z are connected except for J F 3 which has group of connected components equal to Z/2Z. Hence, The automorphism group of C is isomorphic to (Z/2Z) 2 , generated by the automorphisms ι and η lifting the extension to C 0 of ι, η : The quotients E 1 := C Q /η and E 2 := C Q /(ι • η) are curves of genus 1 and the two projections C → E i induce an isogeny J → Pic 0 (E 1 ) × Pic 0 (E 2 ). The elliptic curves Pic 0 (E i ) are not isogenous and ρ = 2.
The torsor on the jacobian
Let ∞, ∞ − ∈ C(Z) be the lifts of (0, 1), (0, −1) ∈ V 2 (Z) ⊂ C 0 (Z) and let us fix the base point b = ∞ in C(Z). Following Section 7 we describe a G m -torsor T → J and maps j b,i : U i → T using invertible O-modules on C × C sm . The torsor T = (id, m· • α) * M × only depends on the scheme morphism α : J Q → J Q , which, by Proposition 7.2, is uniquely determined by an invertible O-module L on (C × C) Q , rigidified on {b} × C Q , of degree 0 on the fibres of pr 2 : (C × C) Q → C Q , and such that diag * L is trivial. We now look for a non-trivial O-module L with these properties using the homomorphism η * : J Q → J Q , which does not belong to Z ⊂ End(J Q ). We can take α of the form tr c • (n 1 ·η * + n 2 ·id), where id : J Q → J Q is the identity map, n i are integers and c lies in J(Q). Using the map α → L α : where D is a divisor on C Q representing c, the maps pr i are the projections C Q × C Q → C Q and Γ η is the graph of the map η : C → C. Hence, we can take L of the form O C Q ×C Q (n 1 Γ η,Q + n 2 diag(C Q ) + pr * 1 D 1 + pr * 2 D 2 ) for some integers n i and some divisors D i on C Q . Among the O-modules of this form satisfying the needed properties, we choose When restricted to the diagonal L is trivial since, compatibly with the trivialisation at (b, b), In particular, the global section l := 1 of O C Q gives a rigidification of diag * L that we write as Following Proposition 7.8 and the discussion preceding it, we choose the extension of L over Using Proposition 7.8 we now compute j b,0 and j b,1 . Since l generates diag * (L) on the whole C sm , we have W 3 = W 43 = 0. The invertible O-module L ⊗m has multidegree 0 over all the fibers C × U 1 → U 1 , hence in order to compute j b,1 we must take V = 0 in (7.7), giving V 3 = V 43 = 0.
Hence for S and x as in (8.2.1), assuming moreover that 2 is invertible on S, where the last equality in (8.2.2) makes sense if the image of x is disjoint from ∞, ∞ − in C S . The restriction L ⊗m to C × U 0 has multidegree 0 over all the fibers C × U 0 → U 0 of characteristic not 3, while if we consider a fiber of characteristic 3 it has degree 2 over K 0 and degree −2 over K 1 . Hence for computing j b,0 we take V = K 0 × (K 0 ∩ U 0 ) in (7.7) giving V 43 = 0, V 3 = 1. Hence for S and x as in (8.2.1), assuming moreover that 2 is invertible on S, where the last equality in (8.2.3) makes sense if the image of x is disjoint from ∞, ∞ − in C S .
Some integral points on the biextension
On C 0 we have the following integral points that lift uniquely to elements of C(Z) Computations in Magma confirm that J(Z) is a free Z-module of rank r = 2 generated by The points in T (Z) are a subset of points of M × (Z) that can be constructed, using the two group laws, from the points in M × (G i , m·f (G j ))(Z) and M × (G i , m·D 0 )(Z) for i, j ∈ {1, 2}. Let us compute in detail M × (G 1 , m · f (G 1 ))(Z). As explained in Proposition 6.9.3, we have where, given a scheme S, an invertible O-module L on C S and a divisor D Since C F 43 is irreducible then 2f (G 1 ) has already multidegree 0 over 43, hence e 43 = 0. If we look at C F 3 then 2f (G 1 ) does not have multidegree 0, while 2f (G 1 ) + K 0 has multidegree 0; hence, by Proposition 6.9.3, Notice that over Z[ 1 2 ] the divisor G 1 is disjoint from β and δ (to see that it is disjoint from δ = (−1, −2, 1) over the prime 3 one needs to look at local equations of the blow up) thus β * O C (γ − α) and δ * O C (γ − α) are generated by β * 1 and δ * 1 over Z[ 1 2 ]. Thus there are integers e β , e δ such that β * O C (γ −α) and δ * O C (γ −α) are generated by β * 2 e β and δ * 2 e δ over Z. Looking at the intersections between β, γ, α and δ we compute that e β = −1 and e δ = 1, hence With analogous computations we see that
Some residue disks of the biextension
Let p be a prime of good reduction for C. Given the divisors we use Lemma 6.6.8 to give parameters on the residue disks in M × (Z/p 2 ) D,E and T (Z/p 2 ) D , with D, E the images of D, E in Div(C Fp ). We choose the "base points x β = x and x D,β = x D,∞ − = 1, x D,∞ = z −1 . For Q in {∞, β, α} and a ∈ F p let Q a be the unique Z/p 2 -point of C that is congruent to Q modulo p and such that x Q (Q a ) = ap ∈ Z/p 2 . We have the bijections 8.5 Geometry mod p 2 of integral points From now on p = 5. Let α ∈ C(Z/p 2 ) be the image of α ∈ C(Z). In this subsection we compute the composition κ : Z 2 → T (Z/p 2 ) j b,1 (α) of the map κ Z : Z 2 → T (Z p ) j b,1 (α) in (4.9) and the reduction map T (Z p ) j b,1 (α) → T (Z/p 2 ) j b,1 (α) . With a suitable choice of parameters in O T, j b,1 (α) , the map κ Z is described by integral convergent power series κ 1 , κ 2 , κ 3 ∈ Z p z 1 , z 2 and κ, composed with the inverse of the parametrization (8.4.1), is given by the images The divisor j b (α) is equal to the image of G t := e 0,1 G 1 + e 0,2 G 2 with e 0,1 := 6 , e 0,2 := 3 in J(F p ) and is a lift of j b,1 (α). The kernel of J(Z) → J(F p ) is a free Z-module generated by G 1 := e 1,1 G 1 + e 1,2 G 2 , G 2 := e 2,1 G 1 + e 2,2 G 2 , with e 1,1 := 16 , e 1,2 := 2 , e 2,1 := 0 , e 2,2 := 5 .
. We now show these computations in the cases of G t andt. The Riemann-Roch space relative described in Equation (6.4.1) sends
8.6
The rational points with a specific image mod 5.
Determination of all rational points
Using that for any point Q in C(F p ) the condition Applying our method to ∞ we discover that U 1 (Z) ∞ contains at most 2 points and the same holds for U 1 (Z) ∞ − . Moreover the action of η, ι on C(Z) tells that U 1 (Z) ι(α) , U 1 (Z) η(α) and U 1 (Z) ηι(α) are sets containing exactly 2 elements. Hence contains at most 12 elements. Looking at the orbits of the action of η, ι on U 1 (Z) we see that #U 1 (Z) ≡ 2 (mod 4), hence #U 1 (Z) ≤ 10. Since U 1 (Z) contains ∞, ∞ − and all the images by η, ι of U 1 (Z) α we conclude that #U 1 (Z) = 10. Applying our method to the point γ we see that U 0 (Z) γ contains at most two points, one of them being γ. Moreover solving the equations κ * f i = 0 we see that if there is another point γ ′ in U 0 (Z) γ then there exist n 1 , n 2 ∈ Z such that Using the Mordell-Weil sieve (see [27]) we derive a contradiction: for all integers n 1 , n 2 , the image in J(F 7 ) of 39G 1 +17G 2 +5n 1 G 1 +5n 2 G 2 is not contained in j b (C(F 7 )). We deduce that Applying our method to to ε we see that U 0 (Z) ε contains at most 2 points corresponding to two different solutions to the equations κ * f i = 0. We can see that one of the two solutions does not lift to a point in U 0 (Z) ε in the same way we excluded the existence of γ ′ ∈ U 0 (Z) γ . Hence U 0 (Z) ε has cardinality at most 1. Using that for every Q ∈ C(F p ) and every automorphism ω of C we have #U 0 (Z) Q = #U 0 (Z) ω(Q) , we deduce that contains at most 6 points. Looking at the orbits of the action of η, ι on U 0 (Z) we see that #U 0 (Z) ≡ 0 (mod 4), hence #U 4 (Z) ≤ 4, and since U 0 (Z) contains the orbit of γ we conclude that #U 0 (Z) = 4. Finally #C(Z) = #U 0 (Z) + #U 1 (Z) = 4 + 10 = 14 .
The fundamental group π 1 (P × (C), 1) is also known as a Heisenberg group. Its action on D τ is given in [6, (4.5.3)]. Now recall the definition of T in (2.12). As M 2g,1 (Z) is the lattice of J(C), and M 1,2g (Z) the lattice of J ∨ (C), each f i is given by an antisymmetric matrix f i,Z in M 2g,2g (Z) such that for all y in M 2g,1 (Z) we have f i (y) = y t ·f i,Z , and by a complex matrix f i,C in M g,g (C) such that for all v in M g,1 (C), for each i we have f i (v) = v t ·f i,C in M 1,g (C). For more details about this description of the f i see the beginning of [6, §4.7]. Then we have with m·f (y) ∈ M ρ−1,2g (Z) with rows the m·y t ·f i,Z . So, π 1 (T (C)) is a central extension of M 2g,1 (Z) by M ρ−1,1 (Z), with commutator pairing sending (y, y ′ ) to (2my t ·f i,Z ·y ′ ) i . The universal covering T (C) is given by with m·(c + f (v)) ∈ M ρ−1,g (C) with rows the m·( c i + v t ·f i,C ) with c i a lift of c i in M 1,g (C). The action of π 1 (T (C), 1) on T (C) is given again, with the necessary changes, by [6, (4.5.3)]. Now that we know π 1 (T (C), 1) we investigate which quotient of π 1 (C(C), b) it is, via j b : C(C) → T (C). We consider the long exact sequence of homotopy groups induced by the C ×,ρ−1 -torsor T (C) → J(C), taking into account that C ×,ρ−1 is connected and that π 2 (J(C)) = 0: Again we see that π 1 (T (C), 1) is a central extension of the free abelian group π 1 (J(C), 0) by Z ρ−1 , and from the matrix description we know that the ith coordinate of the commutator pairing is given by mf Dually, this means that π 1 (T (C), 1) arises as the pushout (9.1.12) where the subscript (0, 0) means the largest quotient of type (0, 0), where the subscript Gal(Q/Q) means co-invariants modulo torsion, and where the left vertical map is m times the quotient map. We repeat that the morphism from π 1 (C(C)) = G to π 1 (T (C), 1) given by the middle vertical map is induced by j b : C(C) → T (C).
Finiteness of rational points
In this section we reprove Faltings's finiteness result [16] in the special case where r < g + ρ − 1. This was already done in [4], Lemma 3.2 (where the base field is either Q or imaginary quadratic). We begin by collecting some ingredients on good formal coordinates of the G mbiextension P ×,ρ−1 → J × J ∨,ρ−1 over Q, and on what C looks like in such coordinates.
Formal trivialisations
Let A, B and G be connected smooth commutative group schemes over a field k ⊃ Q, and let E → A × B be a commutative G-biextension. Let a be in A(k), b ∈ B(k) and e ∈ E(k). For n ∈ N, let A a,n be the nth infinitesimal neighborhood of a in A, hence its coordinate ring is O A,a /m n+1 a . We use similar notation for B with b, and E with e, and also for the points 0 of A, B and E, and, similarly, the formal completion of A at a is denoted by A a,∞ , etc. We also use such notation in a relative context, for example, for the group schemes E → B and E → A. We view completions as A a,∞ as set-valued functors on the category of local k-algebras with residue field k such that every element of the maximal ideal is nilpotent. For such a k-algebra R, A a,∞ (R) is the inverse image of a under A(R) → A(k). Then A 0,∞ is the formal group of A.
We now want to show that the formal G 0,∞ -biextension E 0,∞ → A 0,∞ × B 0,∞ is isomorphic to the trivial biextension (the object G 0,∞ × A 0,∞ × B 0,∞ with + 1 given by addition on the 1st and 2nd coordinate, and + 2 by addition on the 1st and 3rd coordinate). As exp for A 0,∞ gives a functorial isomorphism T A/k (0) ⊗ k G a 0,∞ k → A 0,∞ , and similarly for B and G, it suffices to prove this triviality for G 0,∞ a -biextensions of G 0,∞ a × G 0,∞ a over k. One easily checks that the group of automorphisms of the trivial G 0,∞ a -biextension of G 0,∞ a × G 0,∞ a over k that induce the identity on all three G 0,∞ a 's is (k, +), with c ∈ k acting as (g, a, b) → (g + cab, a, b). As this group is commutative, it then follows that the group of automorphisms of the G 0,∞ -biextension E 0,∞ → A 0,∞ × B 0,∞ that induce identity on G 0,∞ , A 0,∞ ,and B 0,∞ , is equal to the k-vector space of k-bilinear maps T A/k (0) × T B/k (0) → T G/k (0). This indicates how to trivialise E 0,∞ . We choose a sectionẽ of the G-torsor E → A × B over the closed subscheme A 0,1 × B 0,1 of A × B: withẽ(0, 0) = e in E(k).
There are unique such linear maps such that the adjustedẽ is compatible with the given trivialisations of E → A × B over A 0,1 × B 0,0 and over A 0,0 × B 0,1 . In geometric terms,ẽ, assumed to be adjusted, is then a splitting of T G (0) B ֒→ T E/B (0) ։ T A (0) B over B 0,1 that is compatible with the already given splitting over 0 ∈ B(k), and it is also a splitting of T G (0) A ֒→ T E/A (0) ։ T B (0) A over A 0,1 that is compatible with the already given splitting over 0 ∈ A(k). The splitting over B 0,1 gives an isomorphism from (T G (0) ⊕ T A (0)) B 0,1 to (T E/B ) B 0,1 . So the exponential map, for + 1 , for the pullback to B 0,1 of E → B, gives an isomorphism of formal groups over B 0,1 : Viewing E 0,∞ B 0,1 as the tangent space at the zero section of the pullback to A 0,∞ of E → A, this isomorphism gives a splitting of T G (0) A ֒→ T E/A (0) ։ T B (0) A over A 0,∞ . The exponential map for + 2 for the pulback to A 0,∞ of E → A then gives an isomorphism of formal groups over A 0,∞ : where E 0,∞ A 0,∞ /A 0,∞ denotes the completion along the zero section of the pullback via A 0,∞ → A of E → A. The compatibility between + 1 and + 2 on E ensures that this isomorphism is an isomorphism of biextensions, with the trivial biextension structure on the left. Now that we know what good formal coordinates at 0 in E(k) are, we look at the point e in E(k), over (a, b) in (A × B)(k). We produce an isomorphism E 0,∞ → E e,∞ , using the partial group laws. Let E b be the fibre over b of E → B. We choose a section The exponentials for the group laws of E b and A then give a section that we view as an A a,∞ -valued point of E b , and as a section of the group scheme E A a,∞ → A a,∞ , with group law + 2 . The translation byẽ ∞ 1 on this group scheme induces translation by b on B A a,∞ , and maps (a, 0), the 0 element of E a , to e. Hence it induces an isomorphism of formal schemes E (a,0),∞ → E e,∞ . In order to get an isomorphism E 0,∞ → E (a,0),∞ , we repeat the process above, but with the roles of A and B exchanged. We choose a section0 2 : {a} × B 0,1 → E a of E a → {a} × B. Then the exponential for + 2 gives us a section0 ∞ 2 : {a} × B 0,∞ → E a of E a → {a}×B. This0 ∞ 2 is a section of the group scheme E B 0,∞ → B 0,∞ , and the translation on it by0 ∞ 2 sends 0 in E(k) to (a, 0), hence gives an isomorphism of formal schemes E 0,∞ → E (a,0),∞ . Composition then gives us an isomorphism E 0,∞ → E e,∞ , and the good formal coordinates on E at 0 ∈ E(k) give what we call good formal coordinates at e. Similarly, we get a section 0 ∞ 1 of E A 0,∞ → A 0,∞ and a section e ∞ 2 of E B b,∞ → B b,∞ giving isomorphisms E 0,∞ → E (0,b),∞ and E (0,b),∞ → E e,∞ , hence by composition a 2nd isomorphism E 0,∞ → E e,∞ . These isomorphisms are equal for a unique choice of0 1 andẽ 2 (given the choices of0 2 andẽ 1 ).
In Section 9.2.3 we will use that these isomorphisms transport all additions that occur in (4.4) to additions in E 0,∞ and therefore to additions in the trivial formal biextension.
Zariski density of the curve in formally trivial coordinates
Let C be as in the beginning of Section 2. Let C(C) be the inverse image of C(C) under the universal cover T (C) → T (C). Then C(C) is connected since j b : C → T gives a surjection on complex fundamental groups. Now we consider the complex analytic variety T (C) as a complex algebraic variety via the bijection T (C) = C g+ρ−1 as given in (9.1.4). The analytic subset C(C) contains the orbit of 0 under π 1 (T (C), 1). This orbit surjects to the lattice of J(C) in M g,1 (C), and over each lattice point, its fibre in M ρ−1,1 (C) contains a translate of 2πiM ρ−1,1 (Z). Hence this orbit is Zariski dense in C g+ρ−1 . It follows that the formal completion of C(C) at any of its points is Zariski dense in C g+ρ−1 : if a polynomial function on C g+ρ−1 is zero on such a completion, then it vanishes on the connected component of C(C) of that point, hence on C(C) and consequently on T (C).
We express our conclusion in more algebraic terms: for c ∈ C(C), with images t ∈ T (C) and in P ×,ρ−1 (C), each polynomial in good formal coordinates at t of the biextension P ×,ρ−1 → J×J ∨ over C that vanishes on j b (C c,∞ C ), vanishes on T t,∞ C . This statement then also holds with C replaced by any subfield, or even any subring of the form Z (p) with p a prime number, or the localisation of Z (the integral closure of Z in C) at a maximal ideal.
The p-adic closure in good formal coordinates
We stay in the situation of Section 2, but we denote G := G ρ−1 m , A := J and B := J ∨,0ρ−1 , and E := P ×,ρ−1 . Let d G , d A , and d B be their dimensions: d G = ρ − 1, d A = g and d B = (ρ − 1)g.
Let p > 2 be a prime number. From Section 9.2.1 and Lemma 5.1.1 we conclude that we can choose formal parameters for E at 0, over Z (p) , such that they converge on the residue polydisk E(Z p ) 0 , and such that they induce the trivial biextension structure on Z d G p × Z d A p × Z d B p . We keep the notation of Section 9.2.1, for e in E(Z p ), lying over (a, b) in (A × B)(Z p ). This e plays the role that t has at the beginning of Section 4. As explained at the end of Section 9.2.1, we Let p > 2 be a prime number of good reduction for C. We consider the Poincaré torsor as l b and l. Choosing a section that trivializes L on an open subset of (C × C) Zp containing (b, b), (c, b), and (c, c) in (C × C)(F p ) we get a divisor D on (C × C) Zp whose support is disjoint from (c, b) and (c, c), and an isomorphism between L and O(D) on (C × C) Zp . After modifying D with a principal horizontal divisor and a principal vertical divisor D| C×{b} and diag * D are both equal to the the zero divisor on C Zp , hence l b and l are the extensions of elements of Q p , interpreted as rational sections of O(D) on (C × C) Zp . By Propositions 7.5 and 7.8, there exists a unique λ ∈ Q × p such that, for each d ∈ C(Z p ) c , Since x j is the j-th coordinate of log J and since z is the pullback of ψ, we deduce that It should now be easy to exactly interpret geometrically the cohomological approach, showing that in the coordinates used here, the equations for C(Q p ) 2 are precisely equations for the intersection of C(Q p ) and the p-adic closure of T (Z). For doing computations, one can do them in the geometric context of this article, or, as in [5], in terms of theétale fundamental group of C. The connection between these is then given by p-adic local systems on T .
Author contributions This project started with an idea of Edixhoven in December 2017. From then on Edixhoven and Lido worked together on the project. Section 8 is due entirely to Lido. Section 9 was written in July and August 2020. | 27,099.8 | 2019-10-23T00:00:00.000 | [
"Mathematics"
] |
A New Framework Combining Local-Region Division and Feature Selection for Micro-Expressions Recognition
Micro-expressions are deliberate or unconscious movements of people’s psychological activities, reflecting the transient facial true expressions. Previous works focus on the whole face for micro-expressions recognition. These methods can extract a number of feature vectors which are relevant or irrelevant to the micro-expressions recognition. Besides, the high-dimension feature vectors can result in longer computational time and increased computational complexity. In order to address these problems, we propose a new framework which combines the local-region division and the feature selection. Based on the proposed framework, the original images can retain more efficient regions and filter out the invalid components of feature vectors. Specifically, with the joint efforts of the facial deformation identification model and facial action coding system, the global region is divided into seven local regions with their corresponding actions units. The ReliefF algorithm is used to select effective components of feature vectors and reduce the dimension. To evaluate the proposed framework, we conduct experiments on both the Chinese Academy of Sciences Micro-expression II Database and Spontaneous Micro-expression Database with Leave-One-Subject-Out Cross Validation method. The results show that the performance in local combined regions outperforms its counterpart in the global region, and the recognition accuracy is further improved with the combination of feature selection.
I. INTRODUCTION
Language is not the only way of human communication. In contrast to verbal communication, nonverbal communication plays a very important role in interpersonal communication and nurse-patient relationship accounting for 65% of all forms of communication. There are several types of nonverbal communication that can convey a person's true sentiment, thought and personality such as facial expressions, attitudes, interpersonal styles [1], etc. Facial expressions are intuitive reflection of a person's state. Currently, facial expressions can be mainly classified into The associate editor coordinating the review of this manuscript and approving it for publication was Venkateshkumar M . macro-expressions and micro-expressions (MEs). In the early years, the automatic analysis of facial expressions focused on distinguishing the MEs and the macro-expressions [2], [3]. Macro-expressions usually last for about 2 to 5 seconds, and they are distributed throughout the face generally. However, MEs are brief, spontaneous facial movements and usually occur when a person tries to conceal the inner emotions [4], [5]. Studies showed that MEs only last one-twenty-fifth to one-third second with slight intensities of involved muscle movements [6]- [8]. Micro-expression recognition (MER) is considered as an important clue in the field of national security, medical care, psychological diagnosis, and investigative interrogation [9]- [13], etc., because it can detect the real emotions beneath the false surface. VOLUME 8, 2020 This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
Although Ekman and his team developed Micro Expression
Training Tool (METT) [14] to train people to distinguish the categories of synthetic MEs, it is not completely applicable to spontaneous expressions. Recently, the application of computer vision in MER has becoming very popular. Pfister et al. applied a temporal interpolation model together with the first comprehensive spontaneous micro-expression corpus to the field of MER, achieving the first success in recognizing spontaneous facial micro-expressions [15]. The proposed framework mainly consists of five subsections, namely, face alignment, motion magnification, Temporal Interpolation Model (TIM), feature extraction, and classification [16]. In addition, several previous researches have been proposed for MER. Shreve et al. put forward a new approach for the automatic temporal segmentation of facial expressions in long videos which can detect and distinguish between large expressions (macro) and localized micro-expressions in [3]. Huang et al. proposed Spatial-Temporal performed Local Quantization Patterns (STCLQP) [17], the expansion of Completed Local Quantized Pattern (CLQP) [18] to spatial-temporal space, realizing the progress in the analysis of facial MEs. STCLQP utilized three useful difference of pixels, including signbased, magnitude-based and orientation-based, to obtain the compact and distinctive codebook for micro-expression analysis. In [19], Wang et al. proposed an identification algorithm based on Discriminant Tensor Subspace Analysis (DTSA) and extreme learning machine, and extended DTSA to a high-order tensor to process micro-expression video fragments. Furthermore, Local Spatial-temporal Direction Features [20] was developed for analyzing robust principal component of micro-expressions, but it was not improved. Then, the same team proposed a novel color space model which can translate the fourth dimensional RGB containing color information into tensor independent color space for greater accuracy [21]. The aforementioned works have laid a good foundation for the latter studies on facial expression recognition.
However, to date, there are several parts which could be used to enhance the performance of micro-expression recognition. Firstly, MEs are not distributed uniformly across the whole face [22]- [25], and they occur in a combination of several facial regions. For example, the muscular movements of the cheek lift and mouth upturn show a happy emotion, while the raised eyebrows and slightly parted lips indicate that someone feels surprised; Secondly, when using different feature descriptors to extract the block features, the dimension of the feature vectors will increase exponentially with the increase of parameters. For the high-dimensional feature vector containing many components which are unrelated to the occurrence of MEs, it has a high computational cost when performing micro-expression classification. In this paper, we propose an extended framework based on the case proposed by Pfister et al. [15]. We divide the local regions after operating the TIM and performing feature extraction on the local regions. To address the computational complexity of the feature vector, the feature vector with high information content is identified by feature selection for classifying the MEs. Therefore, a new method for automatically recognizing MEs is proposed. The contributions to this article are as follows: • We propose a facial regions-of-interest (RoIs) location technique which depends on an automatic discriminant method, namely, the discriminative facial deformable model method [26], to divide the facial key-area blocks. The divided local-region blocks completely cover all Facial Action Coding System FACS) Action Units (AUs) corresponding to each micro-emotion category, and eliminate the areas which are irrelevant to MEs, effectively avoiding the influence of the irrelevant areas on MER.
• ReliefF algorithm is applied to carry out the feature selection with three spatial-temporal local texture descriptors. The combination of local regions and the feature selection [27] can adequately pick the feature subsets with good distinguishing characteristics. Feature selection can reduce the dimension of feature vectors and increase the speed of processing operation. At the same time, it can retain the feature components with high information content for the classification of MEs, which is the key to improve MER.
• Intensive experiments using three feature descriptors on the two aforementioned datasets show that for MER, the combination of local regions and feature selection performs better than the operations separately. Therefore, using a better combination of facial regions has a great influence on improving the recognition accuracy of MEs.
The remainder of the paper is organized as follows. Section II reviews the databases, the MER in global region and the spatial-temporal local texture descriptors which are used in the experiments. In Section III, we present our proposed framework for MER. Afterwards, we introduce the region division and feature selection in Sections IV and V. We design two comparisons for evaluating this framework in Section VI. Eventually, we draw the conclusion of our study in Section VII.
II. RELATED WORK A. DATABASE
So far, MEs datasets have been established by some teams, but most of the MEs samples in the datasets are artificially controlled and non-spontaneous. Studies have shown that when people produce natural MEs, they themselves even do not realize that they leak their true feelings involuntarily. What's more, there are essential differences between the posed and the authentic spontaneous microexpressions, which proved the defect of the posed MEs in practical applications. Therefore, in order to obtain more realistic and natural spontaneous data, the Spontaneous Micro-Expression (SMIC) database and the Chinese Academy of Science Micro-Expression II (CASME II) database [28], an upgraded version of the CASME, have been collected.
1) SMIC
The SMIC database is the world's first public spontaneous micro-expression database designed by a team in the University of Oulu in Finland in 2012. They carefully chose video clips to induce subjects' emotional responses. To simulate high stake situations, subjects were told that there would be a punishment if they are found any emotion in their faces.
The team collected the real emotions, including happy, sad, disgust, fear, and surprise. In SMIC database, they are divided into three main categories, positive (happy), negative (sad, disgust and fear), and surprise. The collection is done by cameras with different frame rates, respectively, a high speed (HS) camera of 100fps, a normal visual camera (VIS), and a near-infrared (NIR) with 25 fps. Similar experiments were conducted under VIS and NIR cameras for obtaining another two sessions of SMIC with the last eight subjects. Table 1 demonstrates the specific composition of the SMIC database.
2) CASME II
The CASME II database was established by the team of Fu Xiaolan in the Institute of psychology, Chinese academy of sciences. It was an extension of the CASME database, with higher temporal resolution (200 fps) and spatial resolution (about 280 × 340 pixels on facial area) in a well-controlled experimental environment. They selected 247 MEs from nearly 3000 sequences for the database with corresponding AUs and labeled emotions. Similarly, the label of CASME II database has an identical judgment-criteria with the CASME database. According to this, there are five micro-expression labels in the dataset, namely, happy, disgust, surprise, repression, and other. The specific number of sequence and pictures for each type of expressions is shown in the following Table 2.
B. SPATIAL-TEMPORAL LOCAL TEXTURE DESCRIPTORS
Dynamic texture analysis [29] in the field of spatial-temporal space can provide information about the dynamic process of facial appearance and movement, which all have an important impact on the recognition performance. Some descriptions about the Local Binary Pattern on Three Orthogonal Plans
1) LBP-TOP
In the field of facial expression recognition, texture features are commonly used to reflect the distribution properties of pixel space. The Local Binary Patterns (LBP) [30], [31], proposed by the University of Oulu in Finland, is a simple, efficient, and typical texture feature extraction method which can be applied in many fields. Subsequently, Zhao and Pietikäinen put forward the LBP-TOP [29], [32], which is an extension of LBP descriptor in three-dimensional space and a measurement of dynamic image texture. The LBP-TOP operator sets three spatial-temporal axes for the dynamic image sequence, named T, X, and Y. Specifically, the XY plane contains texture information for each frame of image, the XT and YT planes include the changes of the MEs sequence in the spatial position over time. First, we can obtain the LBP values on XY, XT, and YT by using the LBP-TOP operator. Then, we form the final histogram by connecting the local LBP histograms in three orthogonal planes (XY, XT, YT) in series. The concatenated histogram represents the feature vector of LBP-TOP, including the information of the appearance and motion of the dynamic texture. Fig. 1 shows the diagram of extracting the LBP-TOP feature vector.
2) HOG-TOP
Histograms of Oriented Gradients (HOG) was proposed by Dalal and Triggs [33] in 2005. The HOG descriptor obtains the feature vectors based on the specification of image edges. Even in the case of lacking accurate information of the image direction gradient and edge position, the HOG descriptor can characterize the local appearance and shape them by calculating the gradient of image and the amplitude of gradient in different directions [34].
Firstly, we calculate the gradient amplitude and directions in the horizontal and vertical direction of the image to capture the expressions' contour information. Provided the pixel (x, y), we can obtain the gradient components G x and G y by the convolution of the gradient operator K and the original image in the respective directions of x and y, where K = [−1, 0, 1] T . For each point of the image, its gradient amplitude G and gradient direction α of the pixel VOLUME 8, 2020 are computed as follows: Secondly, we need to divide the image into cells with the same size and count the histogram of oriented gradient of each cell. Each cell unit is combined into large, spatially connected blocks. Therefore, we can calculate the histogram features of the oriented gradient on the whole block and normalize the features to reduce the influence of background color and noise.
Finally, we can construct the HOG feature by concatenating the block-based gradient histograms. Moreover, the HOG-TOP [34] operator acts on the dynamic video sequences and its algorithmic process is the same as the HOG.
3) HIGO-TOP
HIGO-TOP is a simplified feature descriptor of the HOG-TOP. This operator ignores the size of the first derivative to reduce the effects of illumination and contrast. Similarly, the HIGO-TOP [17] operator can apply to the dynamic video sequences and its operation is the same as the HIGO.
C. RECOGNIZING MICRO-EXPRESSION IN GLOBAL REGION
When calculating the description over the entire facial expression sequence, the descriptors can only encode the appearance of micro-patterns and ignore their specific positions [35]. For solving the effect, Guoying Zhao and Matti Pietikäinen proposed a new facial representation in [29]. Taking the ordinary facial expression as an example, the regularized facial image can be divided into overlapping or non-overlapping blocks. Figs. 2 and 3 are static images in which blocks of area are separated. Thereinto, Fig. 2 describes the non-overlapping 7 × 5 blocks and the overlapping 4 × 3 blocks are illustrated in Fig. 3. Extracting the feature vectors of the spatial-temporal pattern on each region block and connecting them in series, we can obtain the feature vector of the dynamic video sequences from three different directions.
The method based on the global region can extract the motion features from the facial blocks. However, there are some differences between MEs and the conventional expressions. MEs often occur in some local areas, such as eyes, nose, and mouth [23], [24]. Furthermore, there are two shortcomings in extracting feature vectors from global regions. On one hand, there will be an effect on MER because of extracting the unrelated features in global region. On the other hand, the same organ is repeatedly divided owing to the irregular distribution of the main parts, resulting an incomplete extraction of motion features.
III. PROPOSED FRAMEWORK OF THE MEs RECOGNITION
In this paper, we propose a new framework for improving the accuracy of MER. A specific MER process of our method is shown in Fig. 4.
Firstly, according to a large number of observations, there will also be some changes in facial motions which are unrelated to the MEs. Therefore, dividing the original image into local areas of the face (i.e., eyes, mouth, and so on) and removing the unrelated facial regions are the key to study the internal structure information of the local module deeply. Then, for comparing the usability and superiority clearly, we perform the feature extraction and recognize the regions of interest (RoIs) based on three feature descriptors (i.e., LBP-TOP, HOG-TOP, and HIGO-TOP) in CASME II and SMIC databases.
Next, we use the ReliefF algorithm for feature selection. Feature selection is a process for selecting the most effective features. Therefore, the combination of feature extraction and feature selection can improve the model accuracy and reduce the running time.
There are three advantages of the proposed framework. Firstly, RoIs can exclude the unnecessary and retain the useful parts. Secondly, different combinations of the local regions are beneficial for finding a better information distribution combination. Thirdly, reducing the dimension of feature vectors can effectively shorten the running time and improve the recognition accuracy. 94502 VOLUME 8, 2020
IV. REGION DIVISION
The global method of MEs treats every divided region uniformly, while, the presence of unrelated local regions will reduce the accuracy of recognition. Hence, we need to seek out the involved facial movement regions when MEs occur and use them as the division standard. Extracting the feature vectors of spatial-temporal pattern can eliminate the redundant facial information effectively.
A. FACIAL ACTION UNITS
In our daily life, our facial expressions are varied whether it is a regular expression or a voluntary micro-expression. However, there will be something in common when the same type of expressions appear. American psychologists Ekman and Friesen developed and revised the FACS in [36]- [38] for identifying facial expressions objectively. This system assigns the muscle actions to the face, of which thirty-two are named AUs and another fourteen are Action Descriptors (ADs) [39]. Each AU can describe the specific movement of a single local muscle of the face. Facial expressions are often composed by the synergy of several specific AUs.
The excitement of the muscles can arouse many AUs, which make up different emotions. In addition, an AU can also exist in several different expressions. For example, a person may be thinking when the facial muscles drive the AU4 movement. Likewise, AU4 also appears when someone feels anger, anxiety, pain or ache. Fig. 5 exemplifies some specific AUs. For example, AU1 indicates the inner eyebrows raiser and AU5 shows the rising upper eyelids. Fig. 6 presents the detailed decomposition of AUs with the example of ''Happiness''. We can see that a facial expression can activate one or several AUs. For instance, there are two situations when generating a surprised expression. The first one is that people realized their surprise and concealed this emotion subconsciously when the AU5 fleeting on the face. The second is that people pose a surprised expression deliberately. In this situation, AU5 is mostly appearing with the combination of AUs (1,2,5,25,26) or AU27. Moreover, AUs (4, 7) indicate the confused or concentrated emotion state, while AUs (5, 7) represent slightly fear.
B. CORRESPONDENCE BETWEEN MEs AND AUs
Researchers have done a lot of studies on common expressions. Table 3 shows several conventional expressions with their corresponding AUs. However, the MEs are short and slight muscle movements in a special high-stake situation. It is different with the conventional expression in the relationship with AUs. For example, when a ''surprise'' expression appears on the face, it corresponds to AUs (1,2,5,26) in the conventional expression. But it corresponds to AUs (1, 2), AU25 or AU2 in MEs. Therefore, the correspondence between the MEs and the related AUs have been summed up from the CASMEII database by the Fu Xiaolan group of the Chinese Academy of Sciences Psychology in Table 4.
C. BLOCK-BASED FACIAL DIVISION
In Fig. 7, we use the facial deformation identification model method [28], which is different from the classical Active Appearance Model (AAM) [40], to automatically identify the 49 key points of the face. This algorithm can realize the incremental learning to the facial model for a higher accuracy.
According to the division method of the facial region in [41], the FACS and a method of automatically locating the landmark points are used to divide the RoIs of the face. Fig. 8 demonstrates the distribution of the divided local blocks (i.e., ''eye'', ''nose'', ''mouth'', ''cheek'', and ''chin''). Studying the MEs on the divided seven local areas can further eliminate the redundant facial information and retain more meaningful facial details. The coordinate points corresponding to the boundaries of the seven region blocks demonstrated in Table 5 and Table 6 explain the corresponding AUs in each local area. Comparing Table 4 and Table 6, the local-region blocks divided in this paper mostly cover all AUs, avoiding the effect of other irrelevant AUs.
V. FEATURE SELECTION A. DIMENSION STRAIN
The feature dimension is increased due to the precise parameter setting after the feature extraction, which can easily lead to a ''dimensional disaster''. The excessive feature vector dimension not only increases the computational complexity and the running time, it also reduces the classification accuracy. So, it is useful to reduce the dimension of the extracted feature vector for having a great efficient feature subset. At the same time, the selected features are more discriminative to understanding the model and analyzing the data. Therefore, feature selection [42] is well applied to select the most effective features for reducing the dimension of the feature space. We can see that some proposed feature-selection methods are shown in [43]- [46].
B. RELIEFF ALGORITHM
The Relief, a supervised feature selection algorithm, was first proposed by Kira and Rendell in [47]. It judges the correlation between features and categories based on the feature distinction ability of the nearest neighbor samples, and assigns the weight value for each feature. When the weight value is greater than the predetermined threshold, the feature is retained; otherwise, it will be rejected. ReliefF algorithm is an improved extension method based on the Relief algorithm by Kononeko [48]. This algorithm can overcome the problem of limitation on two-classifications. In addition, it can also solve both the problem of incomplete data and regression problem. ReliefF algorithm makes a great contribution to the multi-label learning when comparing with the most of the multi-label feature selection methods [49].
Similar to the Relief algorithm, we input the training set X = (x i , x i , · · · , x N ) and x i ∈ R m . We set m as the number of features for each sample, the number of nearest neighbor samples is K , the number of iterations is n, and the feature weight threshold is δ. First, we need to initially set the preselected feature subset S to an empty set, and denote the weight values of all features as w(A f ) = 0, where A f (f = 1, 2, · · · , m) represents the f th feature of the sample.
Second, the scope of the iterative operation is selected from 1 to n. ReliefF randomly obtains a sample R from the training sample set. Unlike the Relief algorithm, the ReliefF can find K of the nearest neighbors from the same class, named nearest hits H j (j = 1, 2, · · · , K ). Also, the K of nearest neighbors from the different classes with sample R is named near misses M j (C)(j = 1, 2, · · · , K ), where C is a different category from R [46]. Then, the update calculation for the weight values for each feature is shown as follows: Note that Class(R) represents the category of sample R, and P(C) expresses the ratio of the number of samples of the category C to the total number of samples. Furthermore, the function diff (A f , R, B) for calculating the discrimination of sample R and B in the f th feature A f is defined as: where R(A f ) and B(A f ) represent the value of the f th feature in sample instances R and B, respectively. ReliefF algorithm performs mainly by comparing the feature weight values w(A f ) and thresholds δ. If the feature weight value w(A f ) is greater than the threshold δ, this feature will be added into the preselected feature subset S. So, it is beneficial for improving the recognition accuracy by using the new-building feature subset.
VI. EXPERIMENTS
In this section, we report some results of MER experiments on the databases of SMIC and CASMEII by using three feature extraction methods, LBP-TOP, HOG-TOP, and HIGO-TOP. From these results, we will make further analysis of the effect on different combination of regions and their improvement with feature selection.
A. EXPERIMENTAL SETTING
In the experiments, Leave-One-Subject-Out Cross Validation (LOSO-CV) [51] protocol with Chi square (chi2) kernel [52] of Support Vector Machine (SVM) [53] is used as the classifier for the MER.
For the LOSO-CV, take the CASME II as an example, it collected the MEs video sequences from 26 subjects. Each time the micro-expression video sequence of 25 subjects is gathered as a training set for machine learning, the remaining subject will be used as a test. Then, the classifier SVM can build a model based on the training set. Applying this model to the target dataset can obtain the recognition accuracy. The classification accuracy (Acc) is the ratio of the number of correctly classified samples to the total number of samples (247) in the experiment.
B. COMPARISON OF MER BETWEEN GLOBAL AND LOCAL REGION
The mouth and eyes (''1, 2, 4'') are the main parts in expressions, so we choose them as the basic regions. Based on the basic region (''1, 2, 4'') and the distribution of the divided facial local areas, we can form different region combinations by assembling the basic region and other regions.
1) PARAMETERS SETTING
According to the division code in Fig. 7, we mainly designe five local region combinations for the comparison with global region. In this sub-experiment, we set (P, R) at different values with (4, 1), (8,2), and (8,3). When extracting the feature vectors in each local area, the original image will be divided according to (α, β, τ ), where α, β, and τ are the numbers divided along the axis of X, Y, and T. For this sub-experiment, we set the number α = β referring to the setting in [16]. We vary the value of the blocking area (α, β, τ ) to test the effect of classification recognition under different local area combination. Experiments are performed on CASME II by using LBP-TOP as feature descriptor. The concrete combination of the feature extraction areas and experimental results are shown in Table 7 and Table 8. The best accuracy in each combination areas are highlighted in bold in Table 8. We take the average accuracy for the comparison in Table 8.
2) DISCUSSION AND CONCLUSION
We can see that the average accuracy of the region combinations B, C, D, E, and F all performed better than that in the global region A. The increased accuracy values in the combinations are 4.05%, 3.24%, 0.49%, 5.43%, and 3.32%, respectively. It proves that the global region contains some irrelevant information for MER. Table 8 shows that the best recognition performance of region combination C performs better than the combination B with the increased region 3. But that does not mean that increasing regions can improve the identification accuracy. For example, the best recognition performance in region combination D has decreased with the increased region 7. It suggests that the Acc of the MEs will also be reduced when extracting the features with little relevance.
C. COMPARISON OF FEATURE SELECTION IN LOCAL REGIONS
In this experiment, we compare the Acc before and after feature selection with the three spatial-temporal feature descriptors (LBP-TOP, HOG-TOP, HIGO-TOP) on two databases (CASME II and SMIC). From Table 8, we can see that region combination E has the best performance in average accuracy. To testify the validity of the framework, we conduct the following experiments under the region E. The ReliefF [54] algorithm is selected for feature selection.
1) PARAMETERS SETTING
For the LBP-TOP feature vectors, we design the same settings of (P, R) and (α, β, τ ) on CASME II and SMIC databases. Similarly, we set the number of bins as 4 or 8 for the HOG-TOP and HIGO-TOP descriptors on both databases. Training and testing are all done by using Chi-Square SVM with LOSO-CV. The MER results on CASME II and SMIC databases are shown in Tables 9-14. Each table records VOLUME 8, 2020 the dimension (D), the average time (t) of classification, the recognition accuracy (Acc) and the differentials of the selected and non-selected feature. The best accuracy values in each part are highlighted in bold. The arrow in the table intuitively reflects the tendency of the increased accuracy after feature selection.
2) DISCUSSION AND KEY FINDINGS
For the proposed framework, we used the ReliefF algorithm to operate in the combination E. Tables 9-11 and Tables 12-14 respectively demonstrate the comparison between the selected and non-selected features on both MEs databases. Firstly, with the different parameter settings, dimension of the best MER performance in these experiments are not exactly fixed. Due to varying the parameters (α, β, τ ), we can divide the image into various blocks with corresponding dimensions. For most of the results, the number of the selected best dimensions will be increased with the increasing of original dimensions.
Secondly, from the time comparison between the selected and non-selected features in Tables 9-14, it can be found that the time after feature selection can reduce at least one hundred times. For example, running the MER with the parameter settings (4, 1) and (3,3,1) in Table 9, the average time spent in non-selected feature recognition is 3.8258s, while it only takes 0.0359s in selected feature recognition. Because of the reduction in dimension, the running speed of data is faster and the time for extracting features is also shorten up.
Thirdly, the combination E contains the effective regions for feature extraction. In addition, the ReliefF can select the subset with a high-weight value. From the results of LBP-TOP, HOG-TOP, and HIGO-TOP shown in Tables 9-14, we can find that these selected feature accuracies are all superior to the non-selected features. The largest differentials under three spatial-temporal feature descriptors are 12.55%, 11.34% and 11.33% in CASME II (15.85%, 17.08% and 15.85% in SMIC). Besides, the average values of the differential in CASME II are 6.65%, 3.76% and 6.77% (12.96%, 8.45% and 11.41% in SMIC). Even though the differentials perform less obvious in CASME II, the proposed framework still achieves contented results.
VII. CONCLUSION
In this paper, we mainly propose a novel framework combining the region division and feature selection method ReliefF for automatic MER. The FACS and the facial deformation identification model method are applied to locate the facial local areas. With the assistance of the forty-nine facial key points and the distribution of thirty-two AUs, the global region can be divided into seven regions. Then, the LBP-TOP, HOG-TOP, and HIGO-TOP descriptors are adopted to extract features from MEs respectively. Ultimately, the high-dimensional feature vectors will gain an obvious dimension reduction and higher recognition accuracy by using the novel framework.
There are two important conclusions drawn from the two groups of experiments. Firstly, the best feature subset can be obtained after the feature selection in local areas, which excludes the influence of massive redundant information on classification. Secondly, the combination of local regions can make the most of the internal structure information and significantly improve the MER accuracy when comparing with the global region.
Overall, there are still some challenges and limitations that we need to overcome in the future. The first challenge is that there is still a long way to identify the MER accurately on the basic of current methods. When developing the MER into different fields, we also need to further investigate and work out more creative methods on different databases. In addition, new spontaneous ME datasets are still in demand, which are expected to collect with a large amount of samples and more detailed emotion categories. It is a long-term and arduous task to use computers to automatic discriminate MEs more accurately.
[53] C. Junli and J. Licheng, ''Classification mechanism of support vector machines,'' in Proc. ATEEQ UR REHMAN (Member, IEEE) received the Ph.D. degree from the University of Southampton, in 2017. He worked with the Next Generation Wireless Research Group, University of Southampton, where he focused reliable data transmission in cognitive radio networks. He is currently working as a Lecturer with the Department of Computer Science, Abdul Wali Khan University, Mardan, Pakistan. His main research interests are in next-generation wireless communications, cognitive radio networks, the Internet of Things, the Internet of Vehicles, blockchain technology, and differential privacy. He was a recipient of the several academic awards, such as the Faculty Development Program, Islamic University of Technology (OIC) Dhaka, Bangladesh Distinction Award, and Higher Education Commission Pakistan OIC scholarship for undergraduate studies. VOLUME 8, 2020 | 7,410.2 | 2020-01-01T00:00:00.000 | [
"Computer Science"
] |
The small GTPase ARF6 regulates GABAergic synapse development
ADP ribosylation factors (ARFs) are a family of small GTPases composed of six members (ARF1–6) that control various cellular functions, including membrane trafficking and actin cytoskeletal rearrangement, in eukaryotic cells. Among them, ARF1 and ARF6 are the most studied in neurons, particularly at glutamatergic synapses, but their roles at GABAergic synapses have not been investigated. Here, we show that a subset of ARF6 protein is localized at GABAergic synapses in cultured hippocampal neurons. In addition, we found that knockdown (KD) of ARF6, but not ARF1, triggered a reduction in the number of GABAergic synaptic puncta in mature cultured neurons in an ARF activity-dependent manner. ARF6 KD also reduced GABAergic synaptic density in the mouse hippocampal dentate gyrus (DG) region. Furthermore, ARF6 KD in the DG increased seizure susceptibility in an induced epilepsy model. Viewed together, our results suggest that modulating ARF6 and its regulators could be a therapeutic strategy against brain pathologies involving hippocampal network dysfunction, such as epilepsy.
Introduction
ADP-ribosylation factor 6 (ARF6) belongs to the ARF protein family of small GTPases known to regulate actin remodeling and membrane trafficking [1]. Like other small GTPases, ARFs function as molecular switches by cycling active GTP-bound and inactive GDP-bound forms, a process that is tightly regulated by guanine nucleotide exchange factors (GEFs) and GTPase-activating proteins (GAPs) [2]. Functionally, ARF1 and ARF6 have been the most extensively studied in neurons; ARF1 is essential for regulating transport between intra-Golgi compartments, whereas ARF6 regulates the recycling of endosomes and receptors to and from the plasma membranes, and modulates cortical cytoskeletal organization [1]. In particular, the roles of ARF6 at excitatory synapses have been well described. For example, ARF6 promotes the conversion of immature filopodia to mature dendritic spines, and enhances the stability of early spines in cultured hippocampal neurons by regulating dendritic development, and axonal elongation and branching in postsynaptic neurons during neuronal development [3][4][5][6]. ARF6 also controls the endocytosis of synaptic vesicles in presynaptic neurons [7]. Moreover, loss of ARF6 function induces activity-dependent accumulation of endosomal structures and increases release-competent docked synaptic vesicles, suggesting an active role of ARF6 in regulating cycling and synaptic vesicle pools at presynaptic neurons [8].
Despite these overarching studies, the roles of ARF6 at GABAergic synapses are relatively poorly understood. However, it is possible to propose that normal ARF6 function is crucial for GABAergic synapse development, as evidenced by reported actions of ARF6 GEFs and GAPs at GABAergic synapses. GIT1 regulates GABA A R trafficking and GABAergic synaptic transmission [27], whereas IQSEC3/BRAG3 directly interacts with gephyrin to regulate GABAergic synapse formation [17,[28][29][30].
In the present study, we showed that ARF6 activity is critical for GABAergic synapse development and hippocampal network activity. ARF6 knockdown (KD) in cultured hippocampal neurons decreased GABAergic synapse density, an effect that was completely rescued by ARF6 wild-type (WT) and ARF6-T157A (a fast cycling mutant), but not by ARF6-T27 N (a dominant-negative mutant). In addition, ARF6 KD in the mouse hippocampal DG area reduced GABAergic synapse density, which in turn affected the activity of neuronal populations in the mouse hippocampus and increased susceptibility to kainic acid (KA)-induced seizures.
Animals and ethics statement
C57BL/6 N mice (purchased from Jackson Laboratory, ME, USA; stock number: 013044) were maintained and handled in accordance with protocols approved by the Institutional Animal Care and Use Committee of DGIST under standard, temperature-controlled laboratory conditions. Mice were maintained on a 12:12 light/dark cycle (lights on at 7:00 am and off at 7:00 pm), and received water and food ad libitum. All experimental procedures were performed on male mice. Pregnant rats purchased from Daehan Biolink were used for in vitro culture of dissociated cortical or hippocampal neurons. All procedures were conducted according to the guidelines and protocols for rodent experimentation approved by the Institutional Animal Care and Use Committee of DGIST.
Stereotaxic surgery and virus injections
For stereotaxic delivery of recombinant AAVs, 9-weekold C57BL/6 N mice were anesthetized by inhalation of isoflurane (3-4%) or intraperitoneal injection of a saline solution containing 2% 2,2,2-tribromoethanol (Sigma), and secured in a stereotaxic apparatus. Viral solutions were injected with a Hamilton syringe using a Nanoliter 2010 Injector (World Precision Instruments) at a flow rate of 100 nl/min (injected volume, 0.6 μl). The coordinates used for stereotaxic injections into the hippocampal DG of mice were as follows: anteroposterior (AP), − 2.2 mm; medial-lateral (ML), ± 1.3 mm; and dorsal-ventral (DV), 2.2 mm from bregma. Each injected mouse was returned to its home cage and after 2 weeks was used for scoring seizure-like behaviors, immunohistochemical analyses, or electrophysiological recordings.
Immunoblot analysis of infected brain tissues
Brain regions infected with the indicated AAVs were homogenized in 0.32 M sucrose/1 mM MgCl 2 containing a protease inhibitor cocktail (Thermo-Fisher Scientific) using a Precellys Evolution tissue homogenizer (Bertin Co.). After centrifuging homogenates at 1000×g for 10 min, the supernatant was transferred to a fresh microcentrifuge tube and centrifuged at 15,000×g for 30 min. The resulting synaptosome-enriched pellet (P2) was resuspended in lysis buffer and centrifuged at 20,800×g, after which the supernatant was analyzed by Western blotting with anti-ARF6 antibodies.
Seizure behavior scoring
Nine-week-old male C57BL/6 N mice stereotactically injected with the indicated AAVs were administered KA (20 mg/kg; Sigma Cat. No. K0250) or saline (control), and the resulting seizure behaviors were video-recorded for the next 2 h. Seizure susceptibility was measured by rating seizures every 3 min on a scale of 0 to 5 as follows: 0, no abnormal behavior; 1, reduced motility and prostate position; 2, partial clonus; 3, generalized clonus including extremities; 4, tonic-clonic seizure with rigid paw extension; and 5, death.
Data analysis and statistics
All data are expressed as means ± SEM. All experiments were repeated using at least three independent cultures, and data were statistically evaluated using a Mann-Whitney U test, analysis of variance (ANOVA) followed by Tukey's post hoc test, Kruskal-Wallis test (one-way ANOVA on ranks) followed by Dunn's pairwise post hoc test, or paired two-tailed t-test, as appropriate. Prism7.0 (GraphPad Software) was used for analysis of data and preparation of bar graphs. P-values < 0.05 were considered statistically significant (individual p-values are presented in figure legends).
ARF6 is localized at both GABAergic synapses and glutamatergic synapses in cultured hippocampal neurons
Our previous study demonstrating that the ARF-GEF activity of IQSEC3 is required for maintenance of GABAergic synapse structure raised the possibility that normal levels of ARF activity are crucial for GABAergic synapse development. To date, however, the precise localization of native ARF proteins in neurons has remained unclear, and only a few ARF regulators (i.e., GEFs and GAPs) have been reported to localize to GABAergic synaptic sites. To address the role of ARF6 proteins in mediating GABAergic synapse development, we first performed an immunofluorescence analyses of the synaptic localization of ARF6 in cultured cortical neurons (DIV14) ex utero electroporated with ARF6-HA-IRES-EGFP and gephyrin-tdTomato at E15.5 (our ARF6 antibody was not suitable for immunocytochemical applications in brain sections) (Fig. 1a-c). We found that a subset of ARF6-HA immunoreactive signals colocalized with gephyrin-tdTomato puncta (13.9 ± 2.2%), whereas majority of ARF6-HA signals localized to excitatory synaptic spines (38.9 ± 8.6%) or nonsynaptic sites (47.2 ± 9.5%), suggesting that a fraction of ARF6 proteins is localized to GABAergic synapses (Fig. 1a-c).
Knockdown of ARF6 decreases inhibitory synaptic density in cultured neurons
To determine whether ARF6 impacts GABAergic synapse development, we first generated shRNA lentiviral vectors targeting ARF1 and ARF6 and confirmed their efficacy (Fig. 2a-d). Quantitative reverse transcription-polymerase chain reaction (qRT-PCR) showed that ARF1 and ARF6 mRNA levels were decreased by~85% and~90%, respectively, in cultured rat cortical neurons infected with the corresponding shRNA-expressing lentiviruses (Fig. 2b). In addition, semi-quantitative immunoblotting showed that shRNA targeting ARF6 decreased endogenous ARF6 protein levels (Fig. 2c, d). We then transfected cultured hippocampal neurons at DIV8 with validated shRNA lentiviral vectors targeting Arf1 (sh-Arf1), Arf6 (sh-Arf6) or EGFP only (sh-Control), and immunostained transfected neurons at DIV14 for the excitatory presynaptic marker VGLUT1, the excitatory postsynaptic marker PSD-95 (post-synaptic density protein 95), the inhibitory presynaptic marker GAD67, and the inhibitory postsynaptic markers, gephyrin and GABA A Rγ2 (Fig. 2e-g). As previously reported [3], knockdown of ARF1 (ARF1 KD) or ARF6 (ARF6 KD) significantly reduced the density of PSD-95 + and/or and VGLUT1 + puncta ( Fig. 2e-g). Notably double-KD of ARF1 and ARF6 (ARF1/6 DKD) did not further decrease excitatory synaptic density compared with KD of either protein alone, suggesting that both ARF1 and ARF6 share common pathways in the maintenance of excitatory synapse structure in hippocampal neurons ( Fig. 2e-g). Intriguingly, ARF6 KD also reduced the density of puncta positive for Fig. 1 ARF6 is localized to GABAergic synapses. a, Representative images of cultured mouse cortical neurons from mouse embryos electroporated at E15.5 with Arf6-HA-IRES-EGFP and gephyrin-tdTomato. Cultured cortical neurons were subsequently immunostained for HA at DIV14. Scale bars, 10 μm. b Summary data showing the average intensity of ARF6 at the dendritic spine and gephyrin + puncta. Data are presented as means ± SEMs (n = 40-45 ARF6 + immunoreactive puncta). c Pie chart showing the proportion of HA-ARF6 immunoreactive signals at dendritic spines, gephyrin-positive inhibitory synapses, and non-synaptic sites (spine-negative and gephyrin-negative immunoreactive puncta) Fig. 2 Effects of ARF1 or ARF6 KD on synaptic structures in cultured hippocampal neurons. a Design of lentiviral shRNA vectors for KD of ARF1 or ARF6. Boxes indicate shRNA target sequences in Arf1 and Arf6. Abbreviations: H1, human H1 promoter; IRES, internal ribosome entry sequence; Ub, ubiquitin promoter. b Arf1 and Arf6 mRNA levels in cultured cortical neurons infected at DIV3 with lentiviruses expressing sh-Arf1 or sh-Arf6 were measured by qRT-PCR. mRNA was prepared at DIV10. Dashed line, 85% KD cutoff level for tests of biological effects. Data are presented as means ± SEMs (n = 3 independent experiments; *p < 0.05 vs. control; Mann-Whitney U test). c Cultured cortical neurons were infected with lentiviruses expressing sh-Arf6 at DIV3 and then immunoblotted with the indicated antibodies at DIV10. d Quantification of ARF6, IQSEC3, gephyrin, and PSD-95 levels from c, normalized to control. Data are presented as means ± SEMs of three experiments (***p < 0.001 vs. control; Mann-Whitney U test). e Representative images of cultured hippocampal neurons transfected at DIV8 with lentiviral constructs expressing EGFP alone (Control), sh-Arf1, sh-Arf6 or cotransfected with sh-Arf1 and sh-Arf6 (sh-Arf1/Arf6). Neurons were analyzed by double-immunofluorescence labeling for EGFP (blue; pseudo-colored) and VGLUT1, PSD-95, GAD67, gephyrin or GABA A Rγ2 (red) at DIV14. Scale bar, 10 μm (applies to all images). f, g Summary data showing the effects of ARF1 KD, ARF6 KD or ARF1 and ARF6 DKD (double-knockdown) in neurons on synaptic puncta density (f) and synaptic puncta size (g). Data are presented as means ± SEMs (2-3 dendrites per transfected neurons were analyzed and group-averaged; n = 22-30 neurons; *p < 0.05, **p < 0.01, ***p < 0.001 vs. control; non-parametric ANOVA with Kruskal-Wallis test followed by post hoc Dunn's multiple comparison test) GAD67, gephyrin, or GABA A Rγ2; in contrast, ARF1 KD did not affect GABAergic synaptic puncta density (Fig. 2eg). To investigate whether the modulation of inhibitory synaptic density by ARF6 requires ARF activity, we transfected cultured neurons at DIV8 with a lentiviral expression vector for EGFP only (shControl), ARF6-shRNA, or ARF6-shRNA and an shRNA resistant full-length ARF6 expression vector, and immunostained transfected neurons at DIV14 for various GABAergic synaptic markers. We found that the ARF6 KD-induced reduction in GABAergic synaptic puncta density, monitored by either a single synaptic marker (GAD67 or gephyrin) or both pre-and postsynaptic markers (VGAT and gephyrin), was completely rescued by coexpression of shRNA-resistant ARF6-WT or ARF6-T157A (a fastrecycling mutant), but not by coexpression of ARF-T27 N (a GTP-binding-defective mutant; Fig. 3a-f) [3]. In addition, reduced surface levels of GABA A Rγ2, a critical component of the synaptic GABA A receptor, by ARF6 KD Fig. 3 ARF6 activity is required for GABAergic synapse development in cultured neurons. a Cultured hippocampal neurons were transfected with a lentiviral vector expressing sh-Control, sh-Arf6, or coexpressing sh-Arf6 and shRNA-resistant ARF6 expression vectors (ARF6-WT, ARF6-T27 N, or ARF6-T157A) at DIV8 and analyzed at DIV14 by double-immunofluorescence staining with antibodies to EGFP (blue) and the indicated synaptic markers (GAD67, gephyrin, or GABA A Rγ2). b Summary data showing the effects of ARF6 KD on synaptic puncta density (left) and synaptic puncta size (right), measured using GAD67, gephyrin, and GABAARγ2 as synaptic markers. More than two dendrites per transfected neuron were analyzed and groupaveraged. Data are presented as means ± SEMs from three independent experiments (n = 12-18 neurons; *p < 0.05, **p < 0.01, ***p < 0.001 vs. control; non-parametric ANOVA with Kruskal-Wallis test followed by post hoc Dunn's multiple comparison test). c Cultured hippocampal neurons were transfected with a lentiviral vector expressing sh-Control, sh-Arf6, or coexpressing sh-Arf6 and shRNA-resistant ARF6 expression vectors (ARF6-WT, ARF6-T27 N, or ARF6-T157A) at DIV8 and analyzed at DIV14 by double-immunofluorescence staining with antibodies to EGFP (blue) and surface GABA A Rγ2 (red). d Summary data showing the effects of ARF6 KD on the density of surface GABAARγ2 + puncta (left) and size of surface GABA A Rγ2 + puncta (right). More than two dendrites per transfected neuron were analyzed and group-averaged. Data are presented as means ± SEMs from three independent experiments (n = 12-18 neurons; *p < 0.05, **p < 0.01, ***p < 0.001 vs. control; non-parametric ANOVA with Kruskal-Wallis test followed by post hoc Dunn's multiple comparison test). e Cultured hippocampal neurons were transfected with a lentiviral vector expressing sh-Control, sh-Arf6, or coexpressing sh-Arf6 and shRNA-resistant ARF6 expression vectors (ARF6-WT, ARF6-T27 N, or ARF6-T157A) at DIV8 and analyzed at DIV14 by tripleimmunofluorescence staining with antibodies to EGFP (blue), VGAT (red) and gephyrin (green). f Summary data showing the effects of ARF6 KD on the colocalized puncta density of VGAT and gephyrin (left) and size of colocalized puncta (right). More than two dendrites per transfected neuron were analyzed and group-averaged. Data are presented as means ± SEMs from three independent experiments (n = 16 neurons; ***p < 0.001 vs. control; non-parametric ANOVA with Kruskal-Wallis test followed by post hoc Dunn's multiple comparison test) was normalized by coexpression of shRNA-resistant ARF6-WT or ARF6-T157A (Fig. 3c-d). Notably, expression of ARF6-Q67L (a GTP hydrolysis-resistant mutant) in either cultured hippocampal neurons or the DG of juvenile mice drastically altered neuronal morphology, precluding further analyses (data not shown; see also [3]). These results suggest that normal GTP-GDP cycling of ARF6 is critical for normal operation of GABAergic synapses.
ARF6 is required for GABAergic synapse development in vivo
To extend these observations in neurons in vivo, we used mice stereotactically injected with AAVs that express either sh-Arf6 (ARF6 KD) or sh-Control (Control) in the hippocampal DG and performed immunohistochemical analyses to probe whether ARF6 KD also influences structural aspects of GABAergic synapse development (Fig. 4a). ARF6 KD efficiency and shRNA-resistant ARF6 rescue Fig. 4 ARF6 activity is required for GABAergic synapse development in vivo. a Schematic diagram of AAV vectors expressing sh-Arf6 and HAtagged ARF6 and its mutants (T27 N and T157A) used in c-g. b Immunoblotting analyses with ARF6 antibodies showing the KD efficacy of sh-ARF6 in vivo. Lysates from mouse brain stereotactically injected with AAVs expressing sh-ARF6 were collected and immunoblotted with anti-ARF6 antibodies. Anti-β-actin antibodies were used as normalization controls. c Representative images illustrating EGFP expression after AAV injection into the hippocampal DG region. Brain sections were immunostained for EGFP (green) or HA (red) and counterstained with DAPI (blue). Scale bar: 20 μm (applies to all images). d Representative images showing GABA A Rγ2 + puncta in the DG of mice stereotactically injected with AAVs expressing Control or sh-Arf6, or coexpressing sh-Arf6 and the indicated ARF6 variants (ARF6-WT, ARF6-T27 N, or ARF6-T157A). Scale bar, 20 μm (applies to all images). Abbreviations: MOL, molecular layer; GCL, granule cell layer. e Quantification of the density and size of GABA A Rγ2 + puncta per tissue area. Data are presented as means ± SEMs (n = 20-25 sections/4-5 mice; *p < 0.05, **p < 0.01, ***p < 0.001 vs. control; non-parametric ANOVA with Kruskal-Wallis test followed by post hoc Dunn's multiple comparison test). f Representative images of AAV-infected neurons in DG molecular and hilar regions immunostained for the excitatory marker VGLUT1. Scale bar: 20 μm (applies to all images). g Quantification of VGLUT1 + puncta intensity per tissue area. Data are presented as means ± SEMs from 3 to 5 independent experiments (n = 22-30 sections/4-6 mice; *p < 0.05, **p < 0.01 vs. control; non-parametric ANOVA with Kruskal-Wallis test, followed by post hoc Dunn's multiple comparison test) vectors were validated by Western blotting with ARF6 antibodies and immunofluorescence analysis with HA antibodies, respectively (Fig. 4b, c). Quantitative immunofluorescence analyses revealed a significant decrease in the puncta intensity of the GABAergic synaptic marker GABA A Rγ2 in the DG granular cell layer and DG hilus and molecular layers (Fig. 4d, e). These changes in GABA A Rγ2 intensity in the DG of ARF6-KD mice were completely rescued by coexpression of shRNA-resistant ARF6-WT or ARF6-T157A, but not by coexpression of shRNA-resistant ARF6-T27 N (Fig. 4d, e). In keeping with previous observations, quantitative immunofluorescence analyses of the excitatory synaptic marker VGLUT1 revealed a reduction in the density of VGLUT1 + puncta in the DG molecular layer and hilus (Fig. 4f, g). Collectively, these data suggest that ARF6 is also required for GABAergic synapse development, similar to its established action at glutamatergic synapses.
Loss of ARF6 accelerates seizure susceptibility in an ARF activity-dependent manner
We next sought to determine whether loss of ARF6 induces network dysfunctions, which are often associated with impaired GABAergic synapse formation and function and a resulting imbalance in excitation/inhibition (E/I) ratio at synaptic and circuit levels [30,35]. To test the effect of ARF6 KD on seizure susceptibility, we employed an acute kainic acid (KA)-induced epileptic mouse model, which has been extensively used to dissect molecular mechanisms underlying initial epileptogenesis event(s) that transforms normal neural networks into hypersynchronous networks. After stereotactic injection of a series of AAV vectors expressing ARF6 WT and its mutant variants (T27 N and T157A) [3] into the DG of ARF6-deficient mice, mice were intraperitoneally administered KA (20 mg/kg) and their seizure behaviors were scored (Fig. 5a). The severity of KA-induced convulsive seizures was assessed by scoring responses on a scale from 0 (no abnormal behavior) to 5 (death) using a revised Racine's scale. Average seizure scores for the first 40 min after KA administration were comparable in ARF6-KD mice (1.41 ± 0.10) and control mice (1.33 ± 0.08) (Fig. 5b); average seizure scores for the next 40 min were 2.24 ± 0.18 and 1.75 ± 0.11 in ARF6-KD and control mice, respectively, indicating that the severity of seizure behaviors persisted in these mice (Fig. 5c), and average seizure scores for the last 40 min were~1.6 fold higher in ARF6-KD mice than in control mice (Fig. 5d). Importantly, the increased seizure susceptibility observed in ARF6-KD mice was normalized by coexpression of shRNA-resistant ARF6 WT (2.15 ± 0.15 for the last 40 min) or ARF6-T157A (2.12 ± 0.07 for the last 40 min), but not by coexpression of shRNA-resistant ARF6-T27 N (2.58 ± 0.30 for the second 40 min and 3.14 ± 0.26 for last under each experimental condition (n = 9 mice/condition; **p < 0.01, ***p < 0.001 vs. control; Kruskal-Wallis test followed by Dunn's post hoc test). e Quantification of latency to the first seizure after KA administration under each condition (n = 9 mice/condition; **p < 0.01, Kruskal-Wallis test followed by Dunn's post hoc test). f Quantification of time spent in seizure under each condition (n = 9 mice/condition; **p < 0.01, ***p < 0.001 vs. control; Kruskal-Wallis test followed by Dunn's post hoc test) 40 min) (Fig. 5c, d). ARF6 KD decreased seizure latency, in association with an increase in the total time spent in seizures, both of which were normalized by expression of shRNA-resistant ARF6 WT and ARF6-T157A, but not shRNA-resistant ARF6-T27 N (Fig. 5e, f).
Discussion
Molecular components of synapses have been identified, mostly by mass spectrometry analyses [36,37]. Functional categorization of these proteins has revealed a number of GEFs and GAPs for small GTPases and shown that they constitute roughly~10% of postsynaptic density proteins. Although many of these regulators have been studied at glutamatergic synapses, their roles at GABAergic synapses remain largely undefined. Recent efforts to identify GABAergic synaptic components and related molecular mechanisms have contributed to our understanding of how neural circuits are functionally balanced. However, even whether small GTPases and their regulators are expressed at GABAergic synapses has not been analyzed. In this study, we provide evidence that a fraction of ARF6 protein is localized to GABAergic synapses and functions to regulate GABAergic synapse number and hippocampal network activity. We demonstrated that an ARF6 deficiency leads to impaired GABAergic synapse development in an ARF6 activity-dependent manner in both cultured neurons and in vivo. In addition, the resultant GABAergic synaptic defect induced by ARF6 KD in the hippocampal DG area led to increased seizure susceptibility in mice, possibly owing to disinhibition of network activity in the hippocampal DG.
Strikingly, although the current study clearly showed that effects of ARF6 KD impact both glutamatergic and GABAergic synapse development in both hippocampal cultured neurons and mouse hippocampal DG region (Figs. 3 and 4), we speculate that phenotypic manifestations of ARF6 KD-triggered synapse loss are more prominent at GABAergic synapses, as shown by increased seizure susceptibility in ARF6-KD mice. Moreover, ARF1 KD specifically reduced the density of glutamatergic, but not GABAergic, synapses in cultured hippocampal neurons, suggesting that different small GTPases may participate in the development of distinct synapse types. Importantly, single KD of ARF1 or ARF6 decreased excitatory synapse density, whereas double KD of ARF1 and ARF6 had no further deleterious effect (Fig. 2), suggesting that ARF1 and ARF6 converge on the same downstream signaling cascades to regulate excitatory synapse development.
Similar to the mechanistic action of ARF6 at glutamatergic synapses, our study clearly demonstrated that active conversion of GDP-bound to GTP-bound states, but not the rate of conversion per se, are required for the action of ARF6 at GABAergic synapses (Fig. 3). In this regard, regulators of ARF6 activity, such as IQSEC3 (as a GEF) and GIT1 (as a GAP), act together. However, our observations suggest that ARF6 is not concentrated at synaptic sites (Fig. 1), whereas these regulators exhibit a relatively higher degree of localization at GABAergic synaptic sites [23,35]. Thus, it is likely that these regulators also perform ARF6-independent functions.
Proper neuronal and network functions rely on balanced excitation and inhibition at diverse levels. Imbalances in the E/I ratio are responsible for the onset and/ or progression of various neurological disorders, including epilepsy [28]. Thus, perturbation of ARF6-mediated GABAergic synapse development also contributes to defects in synaptic and circuit inhibition and the concomitant increase in the occurrence of epileptic seizures (Fig. 5). This idea is also supported by our molecular replacement experiments using various ARF6 variants, which showed that ARF6-T27 N failed to rescue ARF6-KD-induced epileptic phenotypes in mice.
Future studies should further dissect the detailed mechanisms by which ARF6 regulates various aspects of GABAergic synapse development. An intriguing possibility is that ARF6 directly regulates the exocytosis/endocytosis of GABA A receptors. This idea is reminiscent of documented roles of ARF6 regulators (e.g. IQSEC1 and IQSEC2) at excitatory synapses, where IQSEC1 and IQSEC2 promote endocytosis of AMPA receptors [18,19,38]. However, epileptic-like behaviors observed in ARF6-KD mice could not be solely attributed to disruption of ARF6-mediated GABAergic synapse signaling, considering the well-documented roles of ARF proteins at glutamatergic synapses. It remains to be determined whether ARF6 differentially acts at specific synapse types and specific neurons. In addition, whether other ARFs besides ARF1 and ARF6 also perform similar or distinct actions at glutamatergic and GABAergic synapses should be investigated. Answering these issues will make an important contribution to our currently incomplete understanding of molecular organization at GABAergic synapses. | 5,553.8 | 2020-01-06T00:00:00.000 | [
"Biology",
"Medicine"
] |
Engineering of Aeromonas caviae Polyhydroxyalkanoate Synthase Through Site-Directed Mutagenesis for Enhanced Polymerization of the 3-Hydroxyhexanoate Unit
Polyhydroxyalkanoate (PHA) synthase is an enzyme that polymerizes the acyl group of hydroxyacyl-coenzyme A (CoA) substrates. Aeromonas caviae PHA synthase (PhaCAc) is an important biocatalyst for the synthesis of a useful PHA copolymer, poly[(R)-3-hydroxybutyrate-co-(R)-3-hydroxyhexanoate] [P(3HB-co-3HHx)]. Previously, a PhaCAc mutant with double mutations in asparagine 149 (replaced by serine [N149S]) and aspartate 171 (replaced by glycine [D171G]) was generated to synthesize a 3HHx-rich P(3HB-co-3HHx) and was named PhaCAc NSDG. In this study, to further increase the 3HHx fraction in biosynthesized PHA, PhaCAc was engineered based on the three-dimensional structural information of PHA synthases. First, a homology model of PhaCAc was built to target the residues for site-directed mutagenesis. Three residues, namely tyrosine 318 (Y318), serine 389 (S389), and leucine 436 (L436), were predicted to be involved in substrate recognition by PhaCAc. These PhaCAc NSDG residues were replaced with other amino acids, and the resulting triple mutants were expressed in the engineered strain of Ralstonia eutropha for application in PHA biosynthesis from palm kernel oil. The S389T mutation allowed the synthesis of P(3HB-co-3HHx) with an increased 3HHx fraction without a significant reduction in PHA yield. Thus, a new workhorse enzyme was successfully engineered for the biosynthesis of a higher 3HHx-fraction polymer.
INTRODUCTION
Polyhydroxyalkanoates (PHAs) are bio-based polyesters produced by a wide range of microorganisms as carbon and energy storage materials. The wild-type strain H16 of Ralstonia eutropha (or Cupriavidus necator) is one of the best-known PHA-producing bacteria (Sudesh et al., 2000;Steinbüchel and Hein, 2001). There has been long-standing interest in using PHAs as biodegradable bioplastics that could serve as alternatives to petrochemical plastics. Recently, PHAs have attracted attention as biodegradable and biocompatible thermoplastics for use in a wide range of agricultural, marine, and medical applications because of their excellent biodegradability (Akiyama et al., 2003).
Polyhydroxyalkanoates mainly consist of short-chain length (SCL; C3 to C5) and/or medium-chain-length (MCL; C6 and longer) monomers (Rehm, 2003). Among the SCL-PHAs, poly[(R)-3-hydroxybutyrate] [P(3HB)] is the most common bacterial PHA in nature. Although P(3HB) is a highly crystalline, hard, and brittle polymer, it begins to decompose at a temperature close to its melting point, making it difficult to process this polymer (Lehrle and Williams, 1994). Copolymerization of MCL monomers with a 3HB unit leads to notable changes in the physical properties of PHA, depending on the molecular structure and copolymer composition (Noda et al., 2005). The best-studied SCL/MCL-PHA copolymer is poly(3HB-co-3-hydroxyhexanoate) [P(3HB-co-3HHx)]. In this polymer, an important aspect is to control the level of the 3HHx monomer fraction for practical application in many fields. For example, the elongation at break increases from 5 to 760% by increasing the 3HHx fraction from 0 to 15 mol% (Doi et al., 1995;Chen et al., 2000;Andreeßen et al., 2014). P(3HB-co-3HHx) with 10-15 mol% 3HHx fraction can be used as an alternative to conventional plastics such as polypropylene and polyethylene (Shimamura et al., 1994;Chen et al., 2000;Andreeßen et al., 2014). However, it is difficult to efficiently produce P(3HBco-3HHx) with such a high 3HHx fraction. Thus, significant efforts have been made to increase the 3HHx fraction in P(3HBco-3HHx) biosynthesis (Jian et al., 2010;Budde et al., 2011;Arikawa and Matsumoto, 2016a).
The bacterium Aeromonas caviae is an original strain that can produce P(3HB-co-3HHx) from plant oils and fatty acids (Shimamura et al., 1994). Aeromonas caviae PHA synthase (PhaC Ac ) shows substrate specificity toward 3HB and 3hydroxyvalerate monomers, as well as the 3HHx monomer . From this point of view, PhaC Ac is a valuable biocatalyst for production of P(3HB-co-3HHx). However, the polymer production capacity of A. caviae is not superior to that of other PHA producers. With the help of genetic engineering, recombinant R. eutropha expressing PhaC Ac was generated, which demonstrated remarkable enhancement of P(3HB-co-3HHx) production from plant oils Doi, 1997, 1998;Kahar et al., 2004).
Additionally, to increase the 3HHx fraction in P(3HB-co-3HHx), various strategies have been developed. One effective approach is to increase the expression of (R)-specific enoylcoenzyme A (CoA) hydratase (PhaJ4b Re ), which provides R-3-hydroxyacyl-CoA precursors for PHA synthesis from the β-oxidation cycle, to reinforce the supply of the 3HHx monomer (Arikawa and Matsumoto, 2016a). In contrast, the 3HHx fraction in the polymer was increased by deleting the gene for the 3HB supplier acetoacetyl-CoA reductase (PhaB Re ) to suppress the 3HB monomer supply; however, the PHA yield decreased (Budde et al., 2011).
Another approach to increase the 3HHx fraction in PHA is the engineering of PHA synthase (Kichise et al., 2002;Tsuge et al., 2004Tsuge et al., , 2007aWatanabe et al., 2012). In previous studies, PhaC Ac was modified via evolutionary engineering approaches, and several mutation sites (e.g., asparagine 149, aspartate 171, valine 214, and phenylalanine 518) enhanced the 3HHx polymerization capacity (Amara et al., 2002;Kichise et al., 2002;Tsuge et al., 2004). Furthermore, a double mutant of PhaC Ac , termed the NSDG mutant, which has two amino acid substitutions of asparagine 149 by serine (N149S) and aspartate 171 by glycine (D171G), was generated as a superior enzyme capable of synthesizing P(3HB-co-3HHx) with a higher 3HHx fraction than the wild-type enzyme (Tsuge et al., 2007b). However, since then, no PhaC Ac mutant with further high 3HHx polymerization ability has been created.
The three-dimensional structure of a protein provides important information for understanding its biochemical function and catalytic mechanism. Homology modeling aims to build three-dimensional protein structure models using experimentally determined structures of related family members as templates. Thus, homology modeling is a powerful tool for understanding and predicting the three-dimensional structure of unknown proteins to determine beneficial mutation sites and improve protein properties (Stoilov et al., 1998;Lee et al., 2011). Recently, some research groups have determined the partial crystal structure of R. eutropha PHA synthase (PhaC Re ), which is classified into the same group (class I) as PhaC Ac based on its substrate specificity and subunit structure (Wittenborn et al., 2016;Kim et al., 2017). According to their crystal structures, three active residues, Cys319, Asp480, and His508, in PhaC Re are in close proximity. Additionally, amino acid residues that make up the substrate pocket have been identified (Wittenborn et al., 2016;Kim et al., 2017). Moreover, structural information on the available PHA synthases has been increasing (Chek et al., 2017(Chek et al., , 2019(Chek et al., , 2020. In this study, using a newly constructed homology model of PhaC Ac , three amino acid residues were predicted to be constituents of the substrate pocket and involved in substrate recognition. Based on this prediction, site-specific mutagenesis was conducted on PhaC Ac NSDG to introduce an additional third mutation. The resulting triple mutants were expressed in the strain 005dC1Z126TRCB, an engineered R. eutropha strain, grown on palm kernel oil as a carbon source for PHA biosynthesis. It was found that the triple mutant PhaC Ac NSDG/S389T is capable of synthesizing P(3HB-co-3HHx) with a higher 3HHx fraction than the parental PhaC Ac NSDG. Furthermore, the selected PhaC Ac triple mutants were isolated as PHA granule-associated enzymes from R. eutropha and characterized through enzyme kinetic analysis to understand how the catalytic function changed.
Bacterial Strains and Plasmids
Bacterial strains and gene expression plasmids used are listed in Table 1. All Escherichia coli strains were grown in Luria-Bertani (LB) medium. The E. coli strains JM109 and S17-1 were used for plasmid construction and as donors in the intergeneric conjugation experiments, respectively. All R. eutropha strains were grown in nutrient broth (Difco Laboratories, Detroit, MI, United States).
To delete the phaC Ac NSDG gene in the R. eutropha CnTRCB strain (Arikawa and Matsumoto, 2016a), the gene This study deletion plasmid pNS2X-sacB-phaC1AdS (Sato et al., 2013) was introduced into the CnTRCB strain by conjugation from the donor strain E. coli S17-1. The deletion of phaC was confirmed through PCR. The resulting strain was named 005dC1Z126TRCB, which retained phaA and phaB involved in the 3HB monomer supply and provided greater proportions of 3HHx than the H16 strain, by enhancing the expression of phaJ4b Re .
Homology Modeling of A. caviae PHA Synthase
A template-based modeling method using HyperChem (HYPERCUBE, INC., Gainsville, FL, United States) (Froimowitz, 1993) was used to predict the structure of PhaC Ac using PDB:5T6O (residues 201-589) from PhaC Re as a template.
Plasmid Construction and Site-Directed Mutagenesis
Plasmids expressing wild-type PhaC Ac , the double mutant NSDG, and the triple mutants NSDG-Y318/S389/L436X were constructed based on the pCUP3 vector, which is stably maintained in R. eutropha (Sato et al., 2013). The wild-type phaC Ac (WT-phaC Ac ) and phaC Ac NSDG genes were obtained through PCR with MunI_PhaCAc_F and SpeI_PhaCAc_R as primers, using the plasmid pColdI::phaC Ac and the genomic DNA of the R. eutropha strain KNK005 as a template, respectively (Sato et al., 2013;Ushimaru et al., 2014). These fragments were digested by MunI and SpeI, and then cloned into the same sites of the pCUP3 vector. The P trp fragment, which was amplified by PCR using pKK388-1 (Clontech, Palo Alto, CA, United States) as a template (Arikawa and Matsumoto, 2016a), was digested with MunI and ligated with MunI-digested pCUP3 vectors carrying WT-phaC Ac and phaC Ac NSDG genes to yield pCUP3-P trp -WT-phaC Ac and pCUP3-P trp -phaC Ac NSDG, respectively. Site-directed mutagenesis of phaC Ac NSDG gene was performed by overlap extension PCR (Ho et al., 1989). Reverse primers containing a point mutation were designed as listed in Supplementary Table S1, and primers containing a restriction enzyme site were designed as (pCUP3_IF_MunI_trp_F) 5 -ACA TTGCGCTGAAAGAAGGGCCAATTGTGCTTCTGGCGTC-3 and (pCUP3_SpeI_IF_R) 5 -GCTCGGATCCACTAGTCGGCT GCCGACTGGT-3 (the underlined sequences indicate the MunI and SpeI sites, and the bold sequences indicate in-fusion alignment). Using the corresponding primers in phaC Ac _Y318/S389/L436X_R and phaC Ac _Y318/S389/L436X_F (Supplementary Table S1), the DNA fragments were amplified.
The resulting fragments after one round of PCR were used as templates, and PCR was performed again using the outside primers with MunI and SpeI sites. The resulting phaC Ac NSDG fragments with point mutations were digested using MunI and SpeI, and then inserted into the corresponding restriction sites in the pCUP3 vector. The resulting pCUP3-P trp -NSDG-Y318/S389/L436X plasmids were introduced into an engineered strain of R. eutropha 005dC1Z126TRCB strain, in which phaC gene was disrupted. Transformation was performed through electroporation, as described previously (Sato et al., 2013;Arikawa et al., 2016b).
PHA Accumulation From Palm Kernel Oil
Polyhydroxyalkanoate production was performed in 50 mL of mineral salt (MS) medium (Kato et al., 1996) with 1.29 g/L (NH 4 ) 2 SO 4 and 1.5 w/v% palm kernel oil as a sole carbon source for 72 h. Kanamycin was added to the medium at a concentration of 50 mg/L to maintain the plasmid in the cells. After cultivation, the collected cells were washed with water and ethanol to remove the remaining carbon sources and then lyophilized (Arikawa et al., 2016b). The PHA content in the cells was determined by gas chromatography (GC) after methanolysis of approximately 15 mg of lyophilized cells in the presence of 15% (v/v) sulfuric acid, as previously described (Lakshman and Shamala, 2006).
Kinetic Analysis of the Granule-Associated PHA Synthase
The PHA synthase activity assay was performed, wherein the amount of CoA released was measured using 5,5-dithiobis(2nitrobenzoic acid) (DTNB) with the following modifications: PHA synthase assay was initiated by adding the granuleassociated PhaC Ac , which was obtained from 24 h of R. eutropha culture broth by ultracentrifugation as previously described (Valentin and Steinbüchel, 1994;Harada et al., 2019). After
Analysis of the PHA Synthase Concentration Through Western Blotting
The concentration of the granule-associated PhaC Ac was determined as previously described (Harada et al., 2019), after incubation with rabbit antiserum against a peptide from the C-terminus of PhaC Ac , followed by incubation with a goat anti-rabbit antibody conjugated with horseradish peroxidase (HRP; Santa Cruz Biotechnology, CA, United States). Proteins were visualized using the ECL Plus Western Blotting Detection Reagent (Bio-Rad, Hercules, CA, United States). Data were recorded using a CCD camera FAS-1000 (Toyobo, Osaka, Japan). Quantitative analysis of PhaC Ac concentration on PHA granules was performed using calibration curves prepared using purified PhaC Ac (130-520 ng). Band intensities were quantified using the ImageJ software 1 .
Amino Acid Residues That Determine the Substrate Pocket Size of PhaC Ac
To identify the beneficial mutation site for increasing the 3HHx fraction, a homology model of PhaC Ac was first built by targeting (Figures 2A,B). This is in good agreement with the experimental observation that PhaC Ac has a broader substrate specificity than PhaC Re (Fukui and Doi, 1997). From the comparison of these structural models, two amino acid residues adjacent to the active center (PhaC Ac vs. PhaC Re : Y318 vs. F318, S389 vs. T393) were found to be different. It was presumed that Y318 and S389 determine the depth and width of the substrate pocket of PhaC Ac , respectively. The substrate entrance tunnel of these models was further compared (Figures 2C,D), and additional differences were found (PhaC Ac vs. PhaC Re : L436 vs. Y440). In PhaC Ac , L436 mainly contributes to expanding the substrate entrance tunnel, because there is a significant difference in the amino acid size at the homologous positions in these structural models.
PHA Synthesis by PhaC Ac NSDG With an Additional Y318 Mutation
As the Y318 of PhaC Ac was predicted to determine the depth of the substrate pocket based on the homology model, we investigated the effect of the amino acid size at this position on 3HHx polymerization ability. To replace Y318, we selected Leu, Ile, and Met, which are smaller than Tyr, with the aim of expanding the substrate pocket space. The three PhaC Ac mutants with NSDG mutations and either Y318L/I/M mutations were generated by sitedirected mutagenesis and expressed in the engineered R. eutropha strain 005dC1Z126TRCB to induce P(3HBco-3HHx) biosynthesis from palm kernel oil. The results are presented in Table 2. The strain expressing the wildtype enzyme accumulated 80.3 wt% P(3HB-co-3HHx) of dried cells, with 7.4 mol% of 3HHx fraction. Meanwhile, the strain expressing PhaC Ac NSDG accumulated 85.7 wt% P(3HB-co-3HHx) of dried cells with 13.1 mol% of 3HHx fraction. A very small amount (less than 0.1 mol%) of 3hydroxyoctanoate (3HO) was also detected, which is consistent with previous study (Tsuge et al., 2007b). PhaC Ac NSDG was confirmed to have the ability to synthesize P(3HB-co-3HHx) with a higher 3HHx fraction than the wild-type enzyme. Compared to NSDG and NSDG/Y318X, a slight increase in the 3HHx fraction was observed in the strain expressing the NSDG/Y318I mutant, whereas the other two strains showed a considerable decrease in the 3HHx fraction. As for the NSDG/Y318L mutant, it showed a slight increase (0.8 mol%) in the 3HO fraction. On the contrary, expression of the NSDG/Y318I mutant notably decreased polymer accumulation (11.7 wt%) in the cells compared to the parental NSDG strain (85.7 wt%). Thus, additional mutagenesis of Y318 was not beneficial.
PHA Synthesis by PhaC Ac NSDG With an Additional S389 Mutation
S389 in PhaC Ac contributes to cavity formation near the active center. It is homologous to T393 in PhaC Re , and the cavity space in PhaC Ac is larger due to the volume of one methyl group. To further expand the cavity space, the amino acid residue at position 389 was replaced with Ala (S389A), which is a smaller amino acid.
To examine the opposite effect on the amino acid size, this residue was also replaced with the larger amino acid Thr (S389T) with the aim of narrowing the space. The two PhaC Ac mutants with NSDG mutations and either S389A/T mutations were generated by site-directed mutagenesis and evaluated for P(3HB-co-3HHx) biosynthesis. The results are presented in Table 3. The additional S389A mutation did not alter the 3HHx fraction. However, the S389T mutation in PhaC Ac NSDG increased the 3HHx fraction to 14.9 mol% without a significant decrease in PHA yield. Since the 3HHx fraction increased due to replacement with the bulkier amino acid in the mutant, further replacements were conducted using Val, Leu, Ile, and Cys, which have bulkier side chains than Ser based on their van der Waals volumes (Darby and Creighton, 1995;Tsuge et al., 2009). As a result, a slight increase in the 3HHx fraction up to 13.8 mol% was observed by introducing S389V/L/I/C mutations in PhaC Ac NSDG. Of the mutations tested, the S389T mutation was the most effective in increasing the 3HHx fraction, followed by S389V. It was found that mutagenesis at position 318 in PhaC Ac may enhance the 3HHx polymerization ability, although replacement with relatively bulky amino acids was effective.
PHA Synthesis by PhaC Ac NSDG With Additional Mutation for L436
L436 is an amino acid located slightly outside the active center, which corresponds to Y440 in PhaC Re . As predicted by homology modeling, the cavity of PhaC Ac is larger than that of PhaC Re because of the difference in the amino acid side size at this position. To examine the effect of mutagenesis for L436 on the 3HHx polymerization ability of PhaC Ac NSDG, sitedirected mutagenesis was performed. To examine the expanding effect of the pocket space, L436A/V mutations were introduced into PhaC Ac NSDG. In addition, L436Y/I mutations were introduced to examine the opposite narrowing effect (Darby and Creighton, 1995;Tsuge et al., 2009). The results are listed in Table 4. PHA accumulation was observed for all strains with polymer contents greater than 80 wt%. However, these mutations showed a decrease in the 3HHx fraction; The L436A and L436Y mutations showed 21% and 66% reductions in the 3HHx fraction, respectively, when compared to the parental NSDG strain. Based on this observation, the residue at position 436 may be involved in substrate recognition, but mutagenesis at this position did not result in an increase in the 3HHx fraction of the polymer.
Kinetic Analysis of PhaC Ac NSDG With S389V/T/C Mutations
To obtain a better understanding of the polymerization ability of the 3HHx monomer of PhaC Ac , granule-associated PHA synthases were prepared and used for enzyme kinetic analysis. The granule-associated PHA synthase does not exhibit a lag phase (Gerngross et al., 1994;Taguchi et al., 2002) because the enzyme is already activated and thus is suitable for use in accurate kinetic analysis. To determine the PhaC Ac concentrations on the surface of the isolated PHA granules, western blotting was performed using an antibody against PhaC Ac . The kinetic parameters determined for wild-type PhaC Ac , NSDG mutant, and NSDG/S389X mutants are listed in Table 5. The NSDG mutant and NSDG/S389X showed a lower Michaelis constant (K m ) for the R-3HHx-CoA substrate than the wild-type PhaC Ac but was not significant for the R-3HB-CoA substrate. In addition, the NSDG mutant and NSDG/S389X mutants showed a higher turnover number (k cat ) for both substrates than the wild-type PhaC Ac , except for NSDG/S389V toward R-3HHx-CoA. Kinetic analysis revealed that the substrate affinity and turnover number, especially for R-3HHx-CoA, increased in the NSDG mutant. Among the mutants tested, the K m values of S389V/C mutants for R-3HHx-CoA, which were 0.46 mM and 0.53 mM, respectively, showed smaller values than that of the parental NSDG strain (0.73 mM). The decrease in K m value indicates the increased affinity between enzyme and substrate, thus providing evidence of the reinforced ability of 3HHx polymerization by these mutations. In contrast, by introducing the S389T mutation into PhaC Ac NSDG, the K m value slightly increased for both R-3HB-CoA and R-3HHx-CoA. Furthermore, the k cat value significantly increased for both substrates by up to 3.4-fold compared to the parental NSDG enzyme. Thus, the increase in the 3HHx fraction caused by the S389T mutation could be attributed to the increased catalytic turnover of the enzyme, rather than the increased affinity between the substrate and the enzyme.
DISCUSSION
This study aimed to increase the 3HHx fraction in P(3HBco-3HHx) by engineering PhaC Ac . Based on evolutionary engineering, we had already generated a PhaC Ac NSDG mutant as a workhorse to synthesize a high 3HHx-fraction polymer. The mutation positions of NSDG are at the N-terminal region of PhaC Ac , and these amino acid residues are predicted to not be involved in the formation of the substrate pocket. Thus, to further modify the PhaC Ac NSDG for higher 3HHx-fraction polymer synthesis, we attempted to change the cavity space of the substrate pocket by replacing certain amino acids. Recently, two research groups have published the partial crystal structure of PhaC Re (Wittenborn et al., 2016;Kim et al., 2017). PhaC Re can polymerize up to C5 monomers, whereas PhaC Ac is capable of polymerizing up to C6 monomers. The difference in substrate specificity may be caused by the size of the substrate pocket near the active center . From this viewpoint, the three-dimensional structures around the cavity pocket space of PhaC Re and the homology model of PhaC Ac were compared, mainly focusing on the difference in the spread of amino acid side chains. As possible determining residues for the pocket size of PhaC Ac , three amino acids, namely Y318, S389, and L436, were identified in this study.
Our homology model suggests that Y318 may be an important residue that determines the pocket size ( Figure 2B). Interestingly, this position is Ala in PHA synthases from Pseudomonas spp. (class II) that can polymerize MCL monomers up to C14. Therefore, it is reasonable to hypothesize that a mutation at this position has a significant influence on the pocket depth. The amino acid at this position in PhaC Re (F318) has been suggested to stabilize the structure of the substrate pocket . Indeed, mutagenesis at this position of PhaC Re led to a decrease in 75% of the synthase activity . In our study, mutation of Y318 of PhaC Ac also resulted in a significant reduction in polymer synthesis ( Table 2). Y318 maintains the structure of the substrate pocket and is strongly related to the polymerization ability in the same manner as PhaC Re .
The docking simulation using the crystal structure of PhaC Re suggested that Y440 is located in the substrate entrance tunnel and contributes to the structural stabilization of the β-mercaptoethylamine/pantothenate (β-MP) moiety of R-3HB-CoA . Y440 stabilizes the substrate orientation by interacting with neighboring amino acids to efficiently catalyze the polymerization reaction. In PhaC Ac , the corresponding L436 was considered to regulate the space of the substrate entrance tunnel based on the homology model ( Figure 2D). In fact, mutagenesis of L436 limited the substrate specificity of PhaC Ac and reduced the 3HHx fraction in the biosynthesized polymer (Table 4). Among the NSDG/L436X mutants examined, the most remarkable reduction in the 3HHx fraction was observed for the NSDG/L436Y mutant, probably due to the narrowest pocket space by replacement with the largest amino acid Tyr.
However, the effect of 3HHx polymerization ability cannot always be explained by the reduction and expansion of pocket space due to amino acid replacement. In this study, we found that the 3HHx fraction in PHA increased after narrowing the substrate pocket by mutagenesis of S389 (Table 3). However, this observation was opposite to our hypothesis.
To better understand the effect of S389 mutagenesis, the kinetics of the enzymes with the S389X mutation were investigated. Kinetic analysis provided new information on the changes in catalytic function due to S389X mutations. It was revealed that substrate affinity for R-3HHx-CoA was increased by S389V/C mutations, whereas the catalytic turnover of the enzyme was increased by the S389T mutation. Thus, the increase in the 3HHx fraction caused by the S389T mutation may be due to the increased catalytic turnover of the enzyme, rather than the change in binding affinity between the enzyme and substrate. The relationship between pocket size narrowing and 3HHx polymerization ability may be explained by stabilization of the substrate orientation when the substrate accesses the active site. The proper orientation of the substrate may increase the efficiency of the catalytic reaction. However, further studies are required to elucidate the underlying mechanisms of mutagenesis.
CONCLUSION
In conclusion, by comparing the substrate pocket structures of PhaC Re and PhaC Ac , a new beneficial mutation position at S389 was found to enhance the 3HHx polymerization ability of PhaC Ac NSDG. Since the discovery of the NSDG mutation, additional mutations conferring a superior ability of 3HHx polymerization have not been found by an evolutionary engineering approach. Thus, this is a successful example of PHA synthase engineering by effectively exploiting the findings from the three-dimensional structure of proteins.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusion of this article will be made available by the authors, without undue reservation. | 5,698 | 2021-03-03T00:00:00.000 | [
"Biology",
"Engineering"
] |
Development and Validation of a Novel Model to Predict Liver Histopathology in Patients with Chronic Hepatitis B
It is still vague for chronic hepatitis B (CHB) patients with normal or mildly increasing alanine aminotransferase (ALT) level to undergo antiviral treatment or not. The purpose of our study was to establish a noninvasive model based on routine blood test to predict liver histopathology for antiviral therapy. This retrospective study enrolled 258 CHB patients with liver biopsy from the First Hospital of Quanzhou (training cohort, n=126) and Huashan Hospital (validation cohort, n=132). Histologic grading of necroinflammation (G) and liver fibrosis (S) was performed according to the Scheuer scoring system. A novel model, ATPI, including aspartate aminotransferase (AST), total bilirubin (TBil), and platelets (PLT), was developed in training cohort. The area under ROC curves (AUC) of ATPI for predicting antiviral therapy indication was 0.83 in training cohort and was 0.88 in the validation cohort, respectively. Similarly, ATPI also displayed the highest AUC in predicting antiviral therapy indication in CHB patients with normal or mildly increasing ALT level. In conclusion, ATPI is a novel independent model to predict liver histopathology for antiviral therapy in CHB patients with normal and mildly increased ALT levels.
Introduction
Hepatitis B virus (HBV) infection is a global public health problem that can induce liver histopathology and may subsequently lead to the development of liver cirrhosis (LC) and hepatocellular carcinoma (HCC) [1]. Antiviral therapy is the first line for chronic hepatitis B (CHB) patients to prevent disease progression [2,3].
CHB patients with elevated ALT levels >2 upper limit of normal (ULN) should be considered to initiate antiviral treatment. However, ALT level is affected by many factors, such as infection, alcohol, drugs, and hemolytic heart disease. In some patients, ALT level is not correlated with hepatitis B infection, significant liver necroinflammation, or fibrosis. Meanwhile, patients with persistently normal or mild ALT increase (<2ULN) may exist significant necroinflammation and fibrosis. The degree of liver necroinflammation or fibrosis is essential for decisions on antiviral treatment [4] and ALT is insufficient to predict it faithfully. Liver biopsy (LB), the golden standard to evaluate liver necroinflammation or fibrosis [5], is not widely used in clinic [6]. FibroScan is also less readily available, especially in resource-limited settings, and less useful in liver necroinflammation diagnosis [7,8]. A new noninvasive method to predict liver necroinflammation or fibrosis is essential for antivirus therapy evaluation to promote precise treatment.
Recently, serum biomarkers, including aspartate aminotransferase (AST) to platelet (PLT) ratio index (APRI) [9], Fib-4 (based on age, ALT, AST, and PLT) [10], and -glutamyl transpeptidase (GGT) to platelet (GPR) [11], have 2 BioMed Research International been reported to effectively and accurately predict significant fibrosis and cirrhosis in patients with CHB and/or hepatitis C. However, these biomarkers were mainly based on patients with hepatitis C or with HCV/HIV coinfection and might produce inconsistent results in CHB patients. And the clinical heterogeneity postulates the application of these biomarkers in CHB patients. Recently, several models in CHB patients have been reported to predict liver fibrosis or cirrhosis but not liver histopathology for commencing antiviral therapy evaluation. And one noninvasive model, to predict liver histopathology for commencing antiviral therapy, was constructed in HBeAg-positive CHB patients with ALT ⩽2 ULN [12], which is not representative of the general CHB patients in China.
This study aims to develop a novel method predictive index based on routine blood tests to predict liver histopathology for commencing antiviral therapy in Chinese CHB patients. We compared diagnostic accuracy of noninvasive biomarkers in a Chinese CHB cohort, then to develop and validate a novel predictive ATPI index based on routine blood tests, including AST, PLT, and total bilirubin (TBil), to predict liver histopathology for commencing antiviral therapy in patients with CHB.
Patients.
We conducted an analysis of a retrospective cohort study at First Hospital of Quanzhou, Fujian Medical University (training cohort) from 1994 to 2008, and an independent cohort study at Huashan Hospital, Fudan University (validation cohort), from 2006 to 2016 using the same criteria. All patients showed evidence of hepatitis B surface antigen (HBsAg) that persisted for >6 months, which defined as CHB [3,4,13]. The exclusion criteria are that (1) the patients coinfected with viral hepatitis (HAV, HCV, HDV, and HIV) (2) and patients had heart disease, thyroid disease, and kidney disease and had antivirus therapy. A total of 467 CHB patients were enrolled in the study. Figure 1 summarizes the flow diagram of the study population. Two hundred and nine patients were excluded due to companying HCC (n=16) and alcoholic liver diseases (n=24) and insufficient laboratory data (n=169). The final study population consisting of 258 patients was divided into a training group (Quanzhou cohort, n=126) and a validation group (Shanghai cohort, n=132). The demographic, biochemical, and histological characteristics of all CHB patients in two groups are shown in Table 1. All trials had been approved by the Ethics Committee of First Hospital of Quanzhou and Huashan Hospital.
Liver Biopsy.
Percutaneous LB under ultrasound guidance was performed using disposable needle (Manan Super-Core, Medical Device Technologies co., LTD, Gainesville, Florida, USA) for training cohort and using a 16G needles (MAX-CORE5 MC1616, BARD5 Peripheral Vascular, Inc., USA) for validation cohort. Liver samples with less than a minimum length of 1.5 cm were poor biopsy samples and were excluded from the study. The specimens were formalin-fixed, paraffin-embedded, and stained with H&E for histological analysis. Histologic grading of necroinflammation (G0-G4) and staging of liver fibrosis (S0-S4) were performed according to the Scheuer scoring system [14] by specialized pathologists. G⩾2 and S⩾2 were considered to indicate moderate/severe inflammation and moderate/severe fibrosis, respectively. According to the AASLD [3], EASL [4], and APASL [13] practice guidelines, patients with necroinflammation ≥G2 or fibrosis ≥S2 need antiviral therapy.
Statistical
Analysis. The data were analyzed using SPSS 13.0 (SPSS Inc., Chicago, IL, USA) and MedCalc5 15.8 (MedCalc Software BVBA, Ostend, Belgium). The data are expressed as the median (interquartile ranges). Differences between groups were compared using Mann-Whitney nonparametric U test for continuous variables, and using chisquare test for categorical variables. Correlation was analyzed by the Spearman's rank correlation coefficient. Univariate and multivariate logistic regression were used to develop a new model for predicting G⩾2 or S⩾2. The receiver operating characteristic (ROC) curve was used to evaluate the diagnostic performance of this new model and other indexes, and the results were expressed as a hazard ratio with 95% confidence interval (95% CI). The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated to explore the best cutoff value. P-values<0.05 were considered as statistically significant. (Table 2). There were no differences among the other indicators (all P>0.05).
Predictors and Regression
Step-forward multiple regression analysis all revealed that TBil (P=0.035), AST (P=0.01), and PLT (P=0.037) were independently correlated with antiviral therapeutic indications. The final multiple regression model incorporating TBil, AST and PLT was ATPI model= 0.054×AST (g/L) + 0.09×TBil ( mol/L) -0.008× PLT (10 9 /L) -0.366. The new ATPI model and its component factors (TBIL and AST) progressively increased with the stage ascending of liver necroinflammation or fibrosis (Figures 2(a), 2(b), and 2(d)), in contrast to PLT levels (Figure 2(c)). Furthermore, ATPI model was strongly positively associated with liver inflammation (r=0.472, P<0.001) and liver fibrosis (r=0.417, P<0.001). The median value for ATPI model in nonantiviral group (0.618) was significantly lower than that in antiviral group (2.128) ( Figure 2). Therefore, the new ATPI model based on TBil, AST, and PLT levels may be a good independent indicator to reflect the degree of liver necroinflammation and fibrosis.
Discussion
In this study, a noninvasive model (named as ATPI, composed of AST, TBil, and PLT) was established to predict antiviral therapy indication in CHB patients. Finally, 95.45% (63/66) patients in the training cohort, 95.65% (66/69) in the validation cohort, in other words 95.56% (129/135) in the entire cohort can be properly evaluated for antivirus therapy, with a best cutoff value 1.53. Therefore, the ATPI might be a potential efficient noninvasive model to determine whether to initiate antiviral treatment in CHB patients.
Many noninvasive models have been established recently to estimate liver fibrosis or cirrhosis with high accuracies, indicating the urgent requirement in clinic. However, few of them were further identified as good predictors for antivirus therapy indication due to Various known and unknown causes. APRI was first proposed by Wai CT et al. in 2003 in patients with chronic hepatitis C and written in several authoritative clinical practice guidelines. However, it had limited diagnostic accuracy in CHB cohort [9]. Also, Fib-4 can accurately differentiate mild to moderate fibrosis from fibrosis and cirrhosis in patients coinfected with HIV/HCV [10]. By contrast, GPR was firstly considered as a more accurate marker than APRI and Fib-4 to stage liver fibrosis in patients with chronic HBV infection in West Africa [11]. In our another cohort, GPR also had been demonstrated with relatively higher accuracy in diagnosing liver fibrosis and cirrhosis compared to other established noninvasive [15]. Therefore, it is necessary to know whether these noninvasive models are adapted to predict antiviral therapy indication.
In this study, we summarized the diagnostic accuracy of the above noninvasive markers and found that APRI, GPR, and Fib-4 also could be applied to predict liver histopathology for initiating antiviral therapy, although they had modest predict accuracy. To improve the prediction of antiviral therapy indication, a novel model, named as ATPI, was developed with TBil, AST and PLT count, which were all reported to be related with liver histopathology [9][10][11]16]. In our training cohort, ATPI displayed the highest AUC value in predicting antivirus therapy indication, although there were no significantly difference between ATPI and APRI or GPR or HBeAg (+) model, the sensitivity and specificity of which in predicting antivirus therapy indication were 63.64% and 92.59%, with a best cutoff value 1.53. The similar results were seen in the validation cohort.
We also determined whether ATPI could be used for differentiating antiviral or nonantiviral therapy set in patients with normal and mildly increased ATL. In training cohort, more than 50% patients with normal or mildly increased ALT level had significantly necroinflammation and fibrosis, which similar with the results of previous studies [17]. Similarly, ATPI also perform as greatly as in total patients from training and validation cohort, displaying the highest AUC in predicting antivirus therapy indication in patients with normal or mildly increasing ALT level.
There are some limitations in the present study. First, there were demographic, biochemical, and histological differences between these two cohorts, which may lead to different results in two groups. Second, we were not able to consider other laboratory variables of potential interest in CHB, such 8 BioMed Research International as HBV virology index (HBsAg, HBeAg, and anti-HBc) due to the data which were also not always fully complete.
Conclusions
Our study showed that ATPI is a novel independent indicator for predicting antiviral therapy indication, especially in patients with normal and mildly increased ALT levels. By applying the predefined cutoffs, most patients can be correctly classified as needing antivirus therapy or not. Thus, ATPI is a good surrogate marker for LB to determine whether to use antiviral treatment.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 2,623.6 | 2019-02-27T00:00:00.000 | [
"Medicine",
"Biology"
] |
Analysing the Impact of Scaling Out SaaS Software on Response Time
When SaaS software suffers from the problem of response time degradation, scaling deployment resources that support the operation of software can improve the response time, but that also means an increase in the costs due to additional deployment resources. For the purpose of saving costs of deployment resources while improving response time, scaling out the SaaS software is an alternative approach. However, how scaling out software affects response time in the case of saving deployment resources is an important issue for effectively improving response time. (erefore, in this paper, we propose a method analysing the impact of scaling out software on response time. Specifically, we define the scaling-out operation of SaaS software and then leverage queueing theory to analyse the impact of the scaling-out operation on response time. According to the conclusions of impact analysis, we further derive an algorithm for improving response time based on scaling out software without using additional deployment resources. Finally, the effectiveness of the analysis’s conclusions and the proposed algorithm is validated by a practical case, which indicates that the conclusions of impact analysis obtained from this paper can play a guiding role in scaling out software and improving response time effectively while saving deployment resources.
e Motivation and Research Challenges.
Response time degradation is one of the software performance issues that SaaS (Software-as-a-Service) providers may encounter due to changes in the runtime environment of the software. As a new software delivery model, SaaS software is composed of a series of atomic services, which are usually deployed on distributed deployment resources and together perform the function of the software [1,2]. When response time degradation occurs, for the sake of ensuring the efficient operation of the software, SaaS providers usually solve this problem by adopting scaling techniques, for instance, scaling up the deployment resources (i.e., upgrading the configurations of current deployment resources where the software is deployed) or scaling out the deployment resources (i.e., providing more deployment resources for the software). However, that will also result in more costs due to renting additional deployment resources.
For the purpose of saving costs of deployment resources, scaling out SaaS software (i.e., adding more instances of services in software) without using additional deployment resources is another approach to improve the response time. Nevertheless, that may not be an easy task for SaaS providers. In such a situation, the new service instances generated by scaling out the software may have to be deployed on existing deployment resources, and then, resource contention will occur between these service instances, which may have a negative impact on response time. If SaaS providers do not know the appropriate strategies for scaling out software with existing deployment resources, the response time of the software may not be effectively improved, which may also increase the burden of human operation. erefore, how to scale out SaaS software to effectively improve the response time without increasing the cost of additional deployment resources becomes a challenging issue for SaaS providers.
e Proposed Solution.
To address the issue mentioned above, our solution is to study the impact of scaling out SaaS software on response time without using additional deployment resources, in order to provide guidance for SaaS providers to scale out software and improve response time effectively. For this purpose, a method for performance analysis is necessary. Queueing theory has been widely applied in performance analysis [3,4]. Since queueing theory is a mathematical analysis method, the performance metrics of the software (for instance, response time) can be analysed in combination with deployment resources and solved with mathematical formulas quantitatively and efficiently, which can contribute to subsequently analysing the impact of scaling out software on response time and making decisions for scaling-out operations [5]. erefore, in this paper, we use queueing theory for our study. In brief, the key contributions of the paper are summarized as follows: (i) We define the scaling-out operation of SaaS software and then analyse the positive and negative influences of scaling-out operations on response time of the software, which gives the direction of scaling out SaaS software (ii) According to the analysis conclusions, we put forward an algorithm of scaling out software without using additional deployment resources, which is helpful in effectively improving response time of the software while saving deployment resources (iii) Aiming to verify the effectiveness of the analysis conclusions and the proposed algorithm, we set up experiments on a practical case e rest of this paper is organized as follows. Section 2 describes the background and problem statement of this paper. Section 3 introduces the process of analysing the impact of scaling out SaaS software on response time. Section 4 presents a practical case verifying the effectiveness of the analysis conclusions obtained from our study. e discussion of our study is presented in Section 5. Section 6 summarizes the related work, and our work is concluded in Section 7.
Background and Problem Statement
To facilitate the description of scaling out SaaS software and the analysis of the response time of software in combination with deployment resources, in this section, first, we will introduce the concepts of SaaS software, deployment scheme, and deployment resources mentioned in this paper. Based on these concepts, we will give a description of the problem to be addressed in this paper.
SaaS Software.
e SaaS software consists of different types of atomic services based on specific logical interaction relations. Figure 1 shows an example of SaaS software, which is composed of five types of atomic services (S 1 ∼S 5 ). For ease of explanation, we use a tree structure to describe the interaction relationship between atomic services (shown in Figure 2) [6]. In Figure 2, the nodes which are denoted as "seq.," "branch," and "loop" represent sequential execution, branch execution, and loop execution, respectively. If the label of the node is "branch," the value on the subsequent arc means the invocation probability of the corresponding branch, while if the label of the node is "loop," the value on the subsequent arc represents the execution times of the subsequent node.
Deployment Scheme and Deployment Resources.
A deployment scheme describes the deployment resources that support the operation of SaaS software and the allocation of the service instances in SaaS software to deployment resources. Figure 3 describes a possible deployment scheme for the SaaS software in Figure 1. e deployment resources in this deployment scheme are four deployment nodes, which are marked as n 1 , n 2 , n 3 , and n 4 from top to bottom. Each node is a virtual machine (VM), and the value in the parenthesis means the VM type of node. Each service in SaaS software has one or more instances deployed on different deployment nodes. In this deployment scheme, there is one instance of service S 1 and S 2 , respectively, and two instances of service S 3 , S 4 , and S 5 , respectively. For instance, the instance of S 3 on n 2 is denoted as s 3,2 , and the instance of S 3 on n 4 is denoted as s 3,4 .
Problem Statement.
To facilitate the problem statement, first, we give the definition of the scaling-out operation of SaaS software as follows.
Definition 1 (Scaling-Out Operation of SaaS Software). e operation of scaling out SaaS software is defined as a fivetuple SO � D init , SS, S i , Act(s i,t , n t ), D new , where we have the following: 2 Scientific Programming SS is the SaaS software S i is one of the services that constitute SS s i,t is the new instance of S i generated after the scalingout operation n t is a deployment node in D init Act(s i,t , n t ) is a function of the scaling-out operation, which generates the new instance s i,t and deploys s i,t on n t D new is the new deployment scheme generated after the scaling-out operation According to Definition 1, the process of scaling out software can be regarded as a series of scaling-out operations. Let the response time of SS before scaling-out operations be R init (SS), and let the response time after scaling-out operations be R new (SS). e problem and goal of this study are to analyse the impact of scaling-out operations on response time of SaaS software and find an effective approach to obtain a better response time R new (SS) comparing with R init (SS) through scaling-out operations without additional deployment nodes.
Impact Analysis of Scaling Out SaaS Software on Response Time
For the purpose of analysing the impact of scaling out SaaS software on response time, an approach to evaluate SaaS software response time is needed. In this section, first, we will briefly introduce the evaluation method of response time by leveraging a queueing performance model. With the evaluation method, we will focus on analysing the impact of scaling-out operations on response time of the software. Finally, based on the analysis conclusions, we will further derive the algorithm of scaling out software for response time improvement. e flow of our work is also shown in Figure 4.
Evaluation of SaaS Software Response Time.
Since the SaaS software consists of a series of services that are deployed on multiple deployment nodes, the response time of SaaS software can be calculated by aggregating the response time of each service on each deployment node [7]. Let n k be a deployment node, and the VM type of n k is v n k whose computing power is CP(v n k ). S i is any service that composes the SaaS software SS, and there is at least one instance of service S i deployed on n k (denoted as s i,k ). According to the Utilization Law [8], the utilization of service instance s i,k to its deployment node n k can be calculated by the throughput of s i,k and the actual time requirement of s i,k processing a request on n k as follows: where T(n k , s i,k ) is the throughput of instance s i,k , T is the throughput of SS, q i is the invocation probability of service S i , p i,k is the workload proportion of instance s i,k , i.e., the proportion of the workload assigned for s i,k to the workload of S i , and TR(S i ) is the time requirement of S i on a node with unit computing resource. en, the utilization of n k (denoted as U(n k )) can be calculated by the sum of the utilization of each service instance deployed on n k : where DS(n k ) is the service type set of service instances that are deployed on n k . According to existing research on performance based on queueing theory, each deployment node n k can be modelled as an M/M/1 queue with a service centre that has CP(v n k ) computing power and serves the requests in the processorsharing (PS) discipline, and the interarrival time of requests and the service time requirement follow exponential distribution [9,10]. en, the response time of each service instance s i,k on n k (denoted as R(n k , s i,k )) can be calculated from the request arrival rate of s i,k on n k (denoted as λ(n k , s i,k )) and the average request processing rate of s i,k on n k (denoted as μ(n k , s i,k )) as follows [11]: where λ(n k , s i,k ) is approximately equal to T(n k , s i,k ) in the balanced status and μ(n k , s i,k ) can be calculated by dividing the amount of computing resources of n k available to the service instance s i,k by the time requirement of service S i processing a request with unit computing resource as follows: erefore, according to equations (1), (2), and (4), equation (3) can be expressed as follows: Scientific Programming 3 en, the response time of service S i (denoted as R(S i )) can be calculated according to the response times of the corresponding service instances as follows: where DN(S i ) is the set of deployment nodes on which the instances of S i are deployed. Finally, the response time of the entire software can be calculated by aggregating the response time of each service with the corresponding invocation probability as follows: where N is the total number of deployment nodes. After scaling out the software, the response time of the software in the newly generated deployment scheme can also be calculated by equation (7). erefore, based on equation (7), we can further analyse the relationship between scalingout operations of software and variation of response time in the next subsection.
Analysing the Variation of Response Time with the Scaling-Out Operation.
For ease of description, the following definitions are given first.
Definition 2 (Response-Time Contribution of Service Instance). Let s i,k (deployed on deployment node n k ) be an instance of service S i in SaaS software SS, and then, the response time of s i,k is R(n k , s i,k ), which is called the response time contribution of service instance s i,k to response time of SS.
Definition 3 (Response-Time Contribution of the Deployment Node). Let DS(n k ) be the service type set of service instances deployed on deployment node n k , and then, the response time contribution of deployment node n k (denoted as R(n k )) means the sum of the response time contribution of each service instance on n k , which can be calculated as follows: Based on Definition 3, the response time of the SaaS software can be seen as the aggregation of the response time contribution of each deployment node in a deployment scheme, as well as the response time after scaling out the software. Assume that, in deployment scheme D init , service S i has n instances deployed on nodes n k1 , n k2 , . . . , n km , . . . , n kn , respectively (the instances are denoted as s i,k1 , s i,k2 , . . . , s i,km , . . . , s i,kn , correspondingly, and we call them existing instances of S i ). Now, we perform a scaling-out operation according to Definition 1, and then, a new service instance of S i is deployed on node n t (this instance is denoted as s i,t ) and a new deployment scheme D new is generated. Comparing the response time contribution of nodes in D new with that in D init , we can see that only the response time contribution of the nodes on which the instances of S i are deployed are varied, i.e., the impact of the scaling-out operation on response time is just from the variation of response time contribution of n k1 , n k2 , . . . , n km , . . . , n kn and n t . Hence, in the following part, we will focus on analysing the impact of the scaling-out operation on response time contribution of these nodes.
e Impact of the Scaling-Out Operation on Response
Time Contribution of Nodes n k1 , . . . , n km , . . . n kn . In deployment scheme D init , based on equation (8), the response time contribution of any of these nodes n km (denoted as R init (n km )) can be calculated as follows: where S u represents the service type of instance deployed on node n km including S i , and equation (9) can be expressed as follows: where S o represents the service type (other than S i ) of instance deployed on node n km , q o is the invocation probability of S o , p o,km is the workload proportion of instance of S o deployed on n km , and TR(S o ) is the time requirement of S o . After deploying s i,t on n t , since in deployment scheme D new the instance number of service S i changed from n to n + 1, the workload proportion of instance s i,km on n km (i.e., p i,km ) will decrease (we denoted it as p i,km ′ in D new ): erefore, similar to equation (10), in D new the response time contribution of node n km (denoted as R new (n km )) can be expressed as follows: Comparing equation (12) with equation (10) and according to equation (11), we can obtain the conclusion that R new (n km ) < R init (n km ), which means that the response time contribution of node n km will decrease after the scalingout operation.
Similarly, we can obtain the conclusion that the response time contribution of nodes n k1 , n k2 , . . . , n km , . . . , n kn will all decrease after the scaling-out operation.
We combine the example of SaaS software shown in Figure 1 and the initial deployment scheme shown in Figure 3 to further explain the abovementioned analysis conclusion. For instance, we consider a scaling-out operation SO � D init , SS, S 5 , Act(s 5,1 , n 1 ), D new , i.e., adding an instance for service S 5 (s 5,1 ) and deploying it to node n 1 through the scaling-out operation SO. Since before the scaling-out operation there exist two instances of S 5 , i.e., s 5,2 on node n 2 and s 5,4 on node n 4 , according to the abovementioned analysis, the response time contribution of node n 2 and n 4 will decrease after the scaling-out operation.
e Impact of the Scaling-Out Operation on Response
Time Contribution of Node n t . Based on equation (8), in deployment scheme D init , the response time contribution of node n t (denoted as R init (n t )) can be calculated as follows: After deploying s i,t on n t , the response time contribution of node n t in D new (denoted as R new (n t )) can be calculated as follows: Scientific Programming 5 Comparing equation (14) with (13), we can obtain the conclusion that R new (n t ) > R init (n t ), which means that the response time contribution of node n t will increase after the scaling-out operation.
We also combine the example in Figure 1 to further explain the abovementioned analysis conclusion. We still consider the scaling-out operation SO � D init , SS, S 5 , Act(s 5,1 , n 1 ), D new }. Since the operation adds a new instance s 5,1 to node n 1 , according to the abovementioned analysis, the response time contribution of node n 1 will increase after the operation.
To conclude, we can see that a scaling-out operation will decrease the response time contribution of the nodes where the instances of S i already exist before the operation, which makes a positive influence on the response time improvement of the software, while it will also increase the response time contribution of the node where the new instance of S i is deployed, which makes a negative influence on the response time improvement of the software. If a SaaS provider can perform a series of scaling-out operations to make the positive influence larger than the negative influence, the response time of the software can be effectively improved.
In the next subsection, we will further give an approach of scaling out software for response time improvement based on the abovementioned analysis conclusions.
Response Time Improvement Based on Scaling Out
Software. According to the analysis in Section 3.2, it can be seen that scaling out SaaS software without additional deployment resources will lead to both positive and negative influences on response time improvement of SaaS software. erefore, to effectively improve the response time by scaling out software, we should determine a series of appropriate scaling-out operations that can make a positive influence on the response time of the software larger than the negative influence. For ease of description, first, we give the following theorem.
in which CP(v n km ) and CP(v n t ) are the computing power of node n km and n t , respectively, U D init (n km ), U D init (n t ), U D new (n km ), and U D new (n t ) are utilizations of node n km and n t in D init and D new , respectively, p i,km and p i,km ′ are the workload proportions of s i,km in D init and D new , respectively, and p i,t is the workload proportion of s i,t in D new .
Proof. Let R init (SS) and R new (SS) be the response time of SS in D init and D new , respectively. When the response time is improved after the scaling-out operation, the following inequality should be satisfied: Let R init (n km ) and R new (n km ) be the response time contribution of node n km in D init and D new , respectively, and let R init (n t ) and R new (n t ) be the response time contribution of node n t in D init and D new , respectively. According to the analysis conclusions in Section 3.2, since the response time contribution of nodes other than n k1 , . . . , n km , . . . n kn and n t is constant, inequality (16) can be transformed into where R c represents the sum of response time contribution of all nodes except n k1 , . . . , n km , . . . n kn and n t . Based on equations (2), (9), and (12)- (14), inequality (17) can be transformed into Let en, we can get the conclusion that the response time will be improved after the scaling-out operation if ΔJ(SO) > 0 is satisfied. erefore, the theorem holds.
We further illustrate eorem 1 with the example in Figure 1 and the scaling-out operation SO � D init , SS, S 5 , Act(s 5,1 , n 1 ), D new }. According to eorem 1, we can calculate the judgment metric ΔJ(SO) by equations (2) and (15). If ΔJ(SO) > 0, it indicates that the scaling-out operation SO can make a positive influence on the response time improvement larger than the negative influence, i.e., SO is a feasible operation which can improve the response time of the software effectively, while if ΔJ(SO) ≤ 0, it indicates that SO cannot improve the response time of the software.
Hence, for the given SaaS software, we perform a series of scaling-out operations that satisfy eorem 1, and then, the response time of the software can be effectively improved. Based on the abovementioned analysis conclusions, Algorithm 1 presents the main process of scaling out SaaS software for improving response time.
e algorithm searches multiple feasible scaling-out operations in order to constitute an operation series. Besides, the algorithm will find a relatively optimal operation series through multiple iterations.
Experiments and Analysis
In this section, we conduct experiments with a practical case to verify the theoretical analysis conclusions in the previous section.
Experimental Setup.
e experimental case is a travel planning system, which is composed of several types of services (the structure is shown in Figure 5). Service S 1 counts the most popular places. Service S 2 , S 3 , and S 4 recommend a set of travel places for customers according to different conditions. Service S 2 selects travel places within the most popular places based on the acceptable price of customer. Service S 3 selects travel places based on the place popularity. Service S 4 selects travel places both considering the price and the popularity of places. e invocation probabilities of S 2 , S 3 , and S 4 are 0.3, 0.3, and 0.4, respectively. Service S 5 plans the best travel route for the trip, and service S 6 arranges the luggage for customers. e deployment nodes for the experimental case are provided by an experimental environment implemented by the server equipped with Intel Core i5-6500 CPU @ 3.60 GHz and 16 GB DDR4 RAM. Each node is a VM running Ubuntu Server 14.04 LTS. e initial deployment scheme (denoted as D init ) of the experimental case is shown in Figure 6, which presents the distribution of service instances in the experimental case on these nodes.
Analysis and Verification of the Impact of the Scaling-Out
Operation on Response Time. In this section, based on the initial deployment scheme D init of the experimental case, we select the services in the experimental case to conduct three trials to analyse and verify the positive and negative influences of performing scaling-out operations on response time. In each trial, we execute a scaling-out operation by adding a new instance of S 4 or S 5 and deploy it on one of the nodes.
e specific scaling-out operations are shown in Table 1. en, we obtain the positive and negative influences of each scaling-out operation with our analysis method. In addition, during the experiments, the workload of service is assigned to each instance on average before and after scaling-out operations.
e experimental results are shown in Table 2. In the table, R init and R new are the response time of the experimental case before and after scaling-out operations, respectively. e positive influence represents the decreased value of response time contribution of nodes where existing instances of S 4 and S 5 are deployed after scaling-out operations, while the negative influence represents the increased value of response time contribution of nodes where new instances of S 4 and S 5 are deployed after scaling-out operations.
As we can see from the results, the scaling-out operation in trial 1 makes a negative influence larger than the positive influence, which means that the response time after the scaling-out operation becomes longer than before, so it is not a feasible scaling-out operation, while the scaling-out operations in trials 2 and 3 can make a negative influence less than the positive influence, so they are feasible scaling-out operations for improving the response time of the experimental case. Besides, the positive influences of scaling-out operations in trials 2 and 3 are the same, and it is because the positive influences all come from node n 3 , and in trials 2 and 3, the changes of the workload proportion of instance s 5,3 deployed on n 3 are the same due to the workload-balancing strategy.
Moreover, Table 2 also presents the value of the judgment metric ΔJ mentioned in eorem 1 for each scalingout operation. When ΔJ > 0 (in trials 2 and 3), the entire response time of the experimental case is reduced after executing the corresponding scaling-out operation, and vice versa (in trial 1). erefore, the judgment metric ΔJ can correctly reflect whether the corresponding scaling-out operation can improve the response time of the experimental case, which can also verify the effectiveness of eorem 1. Moreover, comparing trials 2 and 3, the larger the value of ΔJ, the better the improvement in response time, which illustrates that ΔJ can also reflect the improvement degree in response time.
Analysis and Verification of the Algorithm of Scaling Out
Software. In this section, we conduct experiments to further verify the effectiveness of the proposed algorithm in response time improvement through scaling out the experimental case without additional deployment nodes. For the purpose of demonstrating the effectiveness of the algorithm Scientific Programming on the experimental case in different configurations of deployment resources, we conduct three groups of experiments. In group 1, we use the configuration of deployment resources shown in Figure 6, and based on group 1, we generate the configurations in group 2 and group 3, as shown in Table 3.
During the experiments, for each time, we select a type of service to perform a feasible scaling-out operation according to eorem 1. We iterate the abovementioned step until there is no possible scaling-out operation to improve the response time and then terminate the experiments. e first and final deployment schemes are denoted as D init and D final . To compare the effectiveness of our algorithm in different groups, for each group, we have recorded the response times of the experimental case under D init and D final , including the response times obtained from our analysis method (denoted as R a (D init ) and R a (D final ), respectively) and the response Input: SaaS Software SS, initial deployment scheme D init Output: a series of scaling-out operations SO Series � [SO 1 , . . . , SO l , . . . SO x ], which makes the response time of SS better than in D init (1) Select a type of service S i in SS (7) Construct a scaling-out operation SO l � D old , SS, S i , Act(s i,t , n t ), D new / * e operation will add an instance s i,t for S i and deploy it on node n t * / (8) Calculate the judgment metric ΔJ(SO l ) for SO l / * Using equations (2) and (15) * / (9) if ΔJ(SO l ) > 0 do (10) SO Series tmp [l] ⟵ SO l / * Add the operation SO l to the operation series SO Series tmp * / (11) l ⟵ l + 1 (12) D old ⟵ D new (13) end if (14) if there are other possible scaling-out operations do (15) operationSearchFlag ⟵ true (16) else (17) operationSearchFlag ⟵ false (18) end if (19) end while (20) Calculate R(SS) after executing operation series SO Series tmp / * Using equation (7) SO Series ⟵ SO Series tmp / * Update the optimal operation series SO Series with current series SO Series tmp ) in the three experimental groups is 45.5%, 20.3%, and 27.4%, respectively. e average relative error of R a compared to R m (calculated by (|R a − R m |)/R m ·100%) is generally less than 8%, and according to the experimental results, the accuracy is acceptable for finding feasible scaling operations in order to improve the response time of the experimental case. Specifically, when the initial response time of the experimental case is relatively long (group 1 and group 3), the algorithm can bring a relatively large improvement degree of response time. Even if the initial response time in group 2 is not as long as that in the other two groups, the response time is further improved by scaling out the experimental case. erefore, the experiments illustrate that the conclusions of impact analysis and the proposed algorithm can help improve the response time effectively by scaling out software in different configurations of deployment resources, even without additional deployment nodes.
Analysis of the Statistical Significance of Our Method in
Improving Response Time. With the aim of verifying whether the effectiveness of our method in improving the response time of the experimental case is statistically significant, in this section, we further conduct more simulation experiments and perform statistical hypothesis testing to make extensive evaluation of our method. Specifically, based on the experimental case in Figure 5, we conduct three groups of experiments with three different number of initial service instances. Besides, in each group, we also conduct experiments with three different invocation probabilities of services, two different request arrival rates, and two different deployment configurations, i.e., we conduct a total of 36 different experiments.
During the experiments, according to the response times before and after executing scaling-out operations found by our method, we conduct the paired t-test to verify whether the response times of the experimental case after executing the scaling-out operations are significantly different from the initial response times. We consider two different hypotheses as follows: (i) H0: there is no difference between the response times after executing scaling-out operations and the initial response times (ii) H1: there is a difference between the response times after executing scaling-out operations and the initial response times e results of the paired t-test are shown in Table 4, which presents the significance level of the improved response times compared to the initial response times. Concretely, the table shows the number of tests in each group, the degrees of freedom (df ), the standard deviation differences (SD) and the mean of differences of the improved response times with the initial response times, the t value, and the p value.
Generally, for a paired t-test, the significance level is defined as p < 0.05, i.e., if the p value is less than 0.05, we should reject the null hypothesis (H0) and accept the alternative hypothesis (H1). As shown in Table 4, the p value in each group is less than 0.05, so the null hypothesis (H0) is rejected and the alternative hypothesis (H1) is acceptable, which means that the differences between the improved response times and the initial response times are significant, allowing us to conclude that the effectiveness of our method in improving the response time of the experimental case is statistically significant.
Discussion
In the previous section, some experiments have been conducted to evaluate our study, which indicates that the analysis conclusions and the proposed algorithm derived from our study can effectively provide guidance for scaling out SaaS software and improving the response time. Our study is based on the queueing performance model, and its advantage is that it can be used for quantitatively analysing the positive and negative influences of scaling-out operations on the response time of the software in combination with the deployment resources, which can assist in subsequently making decisions for scaling out software with existing deployment resources instead of additional deployment resources. However, we also have identified some limitations. Our current study mainly models the performance of SaaS software, but in the deployment nodes, there also exist some performance interference factors outside of SaaS software, which may not be captured by the model and limit the effectiveness of analysing and improving the response time of software. In the future, we can extend our model to take these interference factors into account and further enhance the effectiveness of performance improvement.
Related Work
In the field of software performance, there are some works aiming to analyse and improve the performance of software and taking into account the scaling techniques. Vondra et al. [12] developed a new simulation tool based on the queueing model to autoscale VMs in the private cloud. Jiang et al. [13] proposed a novel scheme to autoscale cloud resources for web applications. Bouterse et al. [14] proposed a method to dynamically allocate VMs to provide reserved resources for scalable SaaS applications. ese works mainly studied the scaling approaches in combination with resource provisioning. Besides, El Kafhali et al. [15] presented a queueing mathematical model to estimate the amount of VMs required for scaling software in order to satisfy the performance requirement. is work mainly studied the approach of adjusting the number of resources for scaling software. Some studies also considered the elasticity of resources to guarantee the performance of the scalable software. Salah et al. [16] presented an analytical model that can be used to guarantee proper elasticity for cloud-hosted applications and services, in order to satisfy particular performance requirements. Ghobaei-Arani et al. [17] presented a framework called ControCity for controlling resources elasticity through using buffer management and elasticity management by leveraging the learning automata technique. e aforementioned works mainly considered how to adjust the resources to meet the performance improvement requirement of the scalable software. However, how to specifically adjust and scale the software to improve performance also needs to be further studied.
Zhen et al. [18] proposed a method to automatically scale cloud application instances. Wada et al. [7] presented a method to estimate and optimize the performance of services in a cloud environment by leveraging queueing theory, which considered the strategies of scaling cloud services. ese works mainly require adjusting deployment resources while scaling software. Different from the abovementioned works, we aim to improve the response time of SaaS software based on scaling out software without additional adjustment of deployment resources. erefore, in our work, we analysed both the positive and negative influences of scaling out SaaS software on response time, and based on the abovementioned foundation, we proposed an algorithm of scaling out SaaS software to improve the response time effectively while saving deployment resources.
Conclusions
In this paper, we defined the scaling-out operation of SaaS software and analysed the impact of the scaling-out operation on response time by leveraging queueing theory. e experiments showed that the analysis conclusions can be used to determine whether a scaling-out operation can effectively improve the response time of the software, which can be the basis of making decision for scaling out software. Based on the analysis conclusions, we further proposed an algorithm of scaling out SaaS software for response time improvement. e experiments demonstrated that the response time can be reduced effectively after scaling out the software with the proposed algorithm, which further illustrated that the analysis conclusions obtained from this paper can play a guiding role in scaling out software and improving response time while saving deployment resources. In the future, we will further explore how to apply the analysis conclusions to optimize the performance of SaaS software. 10 Scientific Programming
Data Availability
No data were used to support this study.
Conflicts of Interest
e authors declare no conflicts of interest regarding the publication of this paper. | 8,832.6 | 2021-04-03T00:00:00.000 | [
"Computer Science"
] |
Axial $U(1)$ anomaly in a gravitational field via the gradient flow
A regularization-independent universal formula for the energy--momentum tensor in gauge theory in the flat spacetime can be written down by employing the so-called Yang--Mills gradient flow. We examine a possible use of the formula in the calculation of the axial $U(1)$ anomaly in a gravitational field, the anomaly first obtained by Toshiei Kimura [Prog.\ Theor.\ Phys.\ {\bf 42}, 1191 (1969)]. As a general argument indicates, the formula reproduces the correct non-local structure of the (axial $U(1)$ current)--(energy--momentum tensor)--(energy--momentum tensor) triangle diagram in a way that is consistent with the axial $U(1)$ anomaly. On the other hand, the formula does not automatically reproduce the general coordinate (or translation) Ward--Takahashi relation, requiring corrections by local counterterms. This analysis thus illustrates the fact that the universal formula as it stands can be used only in on-shell correlation functions, in which the energy--momentum tensor does not coincide with other composite operators in coordinate space.
Introduction
Almost half a century ago, just nine months after the appearance of two seminal papers on the axial U (1) anomaly in an electromagnetic field [1,2], Kimura noticed in a lesser-known but remarkable paper [3] that a similar anomalous non-conservation of the axial vector current also occurs in a gravitational field. His result was D α ψ (x)γ α γ 5 ψ(x) − 2m 0 ψ (x)γ 5 ψ(x) = 1 384π 2 ǫ µνρσ R µνλτ (x)R ρσ λτ (x), (1.1) where m 0 is the bare mass of the fermion and R µνρσ (x) is the Riemann curvature. This axial U (1) anomaly, also obtained in Refs. [4,5] (see also Refs. [6,7]), was the first example of the quantum anomaly related to the gravitational interaction, a subject that was to be extensively explored somewhat later in a wider context [8,9]. Recently, by employing the so-called Yang-Mills gradient flow [10][11][12][13][14] (see Refs. [15,16] for reviews) and the small flow time expansion [13], a regularization-independent universal formula for the energy-momentum tensor in gauge theory in the flat spacetime has been constructed [17,18] (see also Ref. [19] for a review); the formula is then applied to the computation of thermodynamic quantities in lattice QCD [20][21][22][23][24][25][26]. References represent a partial list of developments relating to the gradient flow.
In this paper, we examine a possible use of the universal formula in the calculation of the axial U (1) anomaly (1.1); we will obtain Eq. (1.1) by expansion around the flat spacetime. Precisely speaking, the anomaly is a clash between the axial U (1) Ward-Takahashi (WT) relation and the general coordinate (and the local Lorentz) WT relation. A general argument given in Ref. [3], which is analogous to the argument in Ref. [1], shows that the anomaly (1.1) is independent of the adopted regularization as long as the regularization is physically sensible and one imposes the general coordinate WT relation; the structure (1.1) is robust in this sense.
In what follows, we will observe that the universal formula does not automatically reproduce the correct WT relation associated with the general coordinate (or translation in the flat spacetime) WT relation. The resulting correlation functions, however, can be modified by adding appropriate local terms so that the translation WT relation holds. Then, as the general argument implies, we have Eq. (1.1). This shows that the universal formula reproduces the correct non-local structure of the (axial U (1) current)-(energy-momentum tensor)-(energy-momentum tensor) triangle diagram in a way that is consistent with the axial U (1) anomaly. This is expected without any calculation from the construction of the universal formula [17,18], but to check this point explicitly is certainly assuring. On the other hand, this analysis illustrates that the universal formula as it stands can be used only in on-shell correlation functions, in which the energy-momentum tensor does not coincide with other composite operators in coordinate space, because it does not automatically reproduce the translation WT relation when operators coincide. How to remedy this point in (a generalization of) the universal formula is a forthcoming challenge.
This paper is organized as follows. In Sect. 2, we summarize the naively expected form of the axial U (1) and the general coordinate (or translation) WT relations in the flat spacetime limit. The breaking of these relations is regarded as the quantum anomaly. In Sect. 3, using the universal formulas for the energy-momentum tensor of the Dirac fermion [18] and the axial U (1) current [35,42], we compute the total divergences of the triangle diagram and extract the parts potentially corresponding to the anomaly. Each of the axial U (1) current and the energy-momentum tensors can possess a different flow time, t 1 , t 2 , and t 3 ; these are eventually taken to be zero. We adopt a particular ordering of the limits, which turns out to considerably simplify the calculation. We find that the translation WT relation does not hold. In Sect. 4, we seek an appropriate local term added to the triangle diagram, which restores the translation WT relation. Although our analysis here is quite analogous to that of Ref. [35] on the triangle anomaly in gauge theory, partially due to the fact that the translation WT relation also contains some two-point functions, the analysis is much more complicated. Finally, in Sect. 5, by adding appropriate local terms, we obtain the expansion of Eq. (1.1) around the flat spacetime. Section 6 is devoted to the conclusion.
Naively expected form of Ward-Takahashi relations
We consider the Dirac fermion in the curved spacetime with a Euclidean signature. The curved space indices are denoted by Greek letters while the local Lorentz indices are denoted by Latin letters. Letting e a µ (x) be the vierbein, the raising and lowering of the former indices are done by the metric g µν (x) ≡ δ ab e a µ (x)e b ν (x) and its inverse matrix g µν (x); those of the latter indices are, on the other hand, done by the Kronecker deltas, δ ab and δ ab .
The action of the Dirac fermion in the curved spacetime is given by γ a is the Dirac matrix satisfying {γ a , γ b } = 2δ ab and σ ab ≡ 1 2 [γ a , γ b ]. ω ab µ (x) is the spin connection, which is defined by The coupling of the fermion to a gravitational field is given by the energy-momentum tensor: where we have defined T anti-sym.
where σ µν (x) ≡ e a µ (x)e b ν (x)σ ab . We note that the anti-symmetric part of the energymomentum tensor T anti-sym.
µν (x) is proportional to the equation of motion of the fermion. Now, in order to determine the precise form of the quantum anomalies, it is crucial to clearly recognize the form of naively expected WT relations. For simplicity, we consider the massless fermion m 0 = 0 in what follows.
We start from the WT relation associated with the axial U (1) symmetry. For this, we take the correlation function where dµ denotes the functional integration measure for the fermion field, and make the change of integration variables in the form of a localized axial U (1) transformation: 1 Noting that the action (2.1) changes under this change of variables as and considering the flat spacetime limit e a µ (x) → δ a µ , neglecting a possible breaking of the symmetry associated with the regularization, we have the identity where the first equality follows from the covariance under the Lorentz and parity transformations in the flat spacetime. The breaking of this naively expected relation is thus regarded as a quantum anomaly. Next, we consider the WT relation associated with the general coordinate invariance and the local Lorentz symmetry. For this, we start with and consider the following form of the change of integration variables: This is a particular combination of the general coordinate transformation and the local Lorentz transformation. Under this change of integration variables, from the fact that the 1 The chiral matrix γ 5 is defined by γ 5 ≡ 1 4! ǫ abcd γ a γ b γ c γ d by the totally anti-symmetric tensor ǫ abcd being normalized as ǫ 0123 = 1. action does not change if the vierbein is also changed by the same set of transformations, we have where the total energy-momentum tensor is given by Eq. (2.6). Considering the flat spacetime limit, we thus have the identity where again the first equality follows from the covariance under the Lorentz and parity transformations. We have introduced the combination On the other hand, by considering the change of integration variables of the form of the local Lorentz transformation, in Eq. (2.13), we have the following identity in the flat spacetime limit: where the last equality again follows from the covariance under the Lorentz and parity transformations and we have defined Thus, combining Eqs. (2.16) and (2.19), we have the relation in the flat spacetime limit: is the naively expected form of the WT relation associated with the general coordinate invariance and the local Lorentz symmetry (in the flat spacetime limit). Thus, the breaking of this relation should be regarded as a quantum anomaly. It can be confirmed that one can directly derive the WT relation (2.22) only by using the translational invariance in the flat spacetime (see Appendix A). The last three two-point functions including O 1 , O 2 , and O 3 play a crucial role in the following analysis of the anomaly. The contribution of such "two-sided diagrams" in addition to the triangle diagram, which have no analogue in the axial U (1) anomaly in gauge theory, has of course already been noted in Ref. [3] (through a somewhat different derivation from ours).
Computation of anomalies 3.1. Definition of the three-point function
Now, for the vector-like gauge theory in the flat spacetime, we know representations of the axial vector current [35,42] and the symmetric energy-momentum tensor [17,18,20] by the small flow time limit of flowed fields. In the zeroth order in the gauge coupling, the representations are rather trivial: where χ(t, x) andχ(t, x) are flowed fermion fields and eventually we have to take the small flow time limit t → 0 in the correlation functions. Then, using the tree-level propagator of the flowed fermion field [14], 2
4)
2 Throughout this paper, we use the abbreviation In this definition, we have adopted a particular ordering of the small time limit; we first set t 2 = t 3 → 0 and then t 1 ≡ t → 0. Because of the Gaussian damping factors e −(t1+t2)p 2 , e −(t2+t3)q 2 , and e −(t3+t1)k 2 in the first definition, the momentum integration is absolutely convergent as long as t 1 ≡ t > 0; we can thus trivially take the first limit t 2 = t 3 → 0 inside the momentum integration. It turns out that this particular ordering considerably simplifies the actual calculation of the anomalies below. 3 We also note that the expression (3.5) does not require further regularization; in other words, Eq. (3.5) is independent of the adopted regularization in the limit in which the regulator is sent to infinity. This shows the universality of the representations (3.1) and (3.2), although this finiteness is trivial in the present zerothorder perturbation theory in the gauge coupling.
Anomaly in the axial WT relation
We are primarily interested in the anomalous divergence of the axial vector current, the breaking of the WT relation (2.12). From our definition (3.5), after careful rearrangements, we find the identity (omitting the symbol lim t→0 ) 4 where F.T. denotes the Fourier transformation: On the left-hand side of Eq. (3.6), the two-point functions have been defined by These regularized two-point correlation functions identically vanish, as should be the case from the Lorentz and parity covariance. In deriving Eq. (3.6), we first apply ∂ x α to Eq. (3.5). In the integrand, this produces the factor / p − / k; each term of this is canceled by 1// p and 1// k. We then use the identities p + q = (p − k) + (q + k) and q + k = (k − p) + (p + q) and express the momentum (p − k) α by the derivative −i∂ x α . These manipulations give rise to the right-hand side of Eq. (3.6). The last two terms on the left-hand side of Eq. (3.6) are simply zero, as noted above; the 4 We have noted that the spinor trace with γ 5 requires at least other four Dirac matrices. 8 inclusion of those terms, however, clearly shows the correspondence to the naively expected axial WT relation (2.12).
Thus, comparing Eq. (3.6) and Eq. (2.12), we find that the anomalous breaking of the axial symmetry is given by the right-hand side of Eq. (3.6). We note that in Eq. (3.6) if Gaussian factors such as e −tp 2 and e −tk 2 are simply unity (i.e., if we could naively set t → 0 before the momentum integration), then the right-hand side identically vanishes. The fact is that there are Gaussian factors and they give rise to a non-vanishing result. After a straightforward calculation in the t → 0 limit, we find (3.10)
Anomaly in the translation WT relation
Next, we investigate the anomalous breaking of the translation WT relation (2.22). From Eq. (3.5), after careful rearrangements by using the relation 9 we have the identity 5 In this expression, the two-point functions on the left-hand side have been defined by and The two-point functions in Eqs. (3.13)-(3.15) identically vanish as should be the case from the Lorentz and parity covariance. On the right-hand side of Eq. (3.12), the last two lines change sign under the change of integration variables, p → −k and k → −p. Thus, those two lines identically vanish. The other three terms do not vanish and after a tedious calculation in the limit t → 0, we have
Axial anomaly in the two-point functions
As we will see, however, the axial anomaly in the two-point function (3.22) can be removed by adding an appropriate local term to the two-point function (3.18).
Local counterterms
Now, the anomalous breaking of the axial WT relation in Eq. (3.10) would have intrinsic meaning only when we require the validity of the translation WT relation (2.22). That is, we still have the freedom to modify the local part of the three-point correlation function (3.5) by adding a "local counterterm" C α,µν,ρσ (x, y, z). In the momentum space, it must be a cubic polynomial of external momenta. We see that the general form of the counterterm that is consistent with the symmetric structure of the three-point function (3.5) is given by + ǫ αµβγ p β q γ (e 1 p ν δ ρσ + e 2 p ρ δ νσ + e 3 q ν δ ρσ + e 4 q ρ δ νσ ) − ǫ αρβγ p β q γ (e 3 p σ δ µν + e 4 p µ δ νσ + e 1 q σ δ µν + e 2 q µ δ νσ ) where c i , d i , e i , and f i are constants. The basic idea is to choose the coefficients c i , d i , e i , and f i so that the right-hand side of Eq. (3.19) vanishes after the addition j 5α (x)T sym. µν (y)T sym. ρσ (z) + C α,µν,ρσ (x, y, z). Then, to the axial anomaly (3.10), the counterterm contributes by A complication arises, however, since we may also modify the two-point functions (3.16)-(3.18) appearing in the relation (3.19) by adding local terms. We choose the counterterms for the two-point functions such that the axial U (1) WT relations hold for the two-point functions.
For the two-point function (3.16), we thus require the validity of the axial WT relation, where S 1α,β,ν,ρσ (x, z) is a local term. Equation (3.20), however, shows that there is no axial anomaly in this two-point function and thus we should require ∂ x α S 1α,β,ν,ρσ (x, z) = 0. It turns out that the most general form of such a local term is Similarly, for the two-point function (3.17), requiring implies ∂ x α S 2α,µν,ρσ (x, z) = 0 because of Eq. (3.21) and the possible form of the counterterm is given by Finally, after some examination, we find that the most general form of the counterterm for the function (3.18) is given by We choose the coefficients c ′ 0 etc. so that the addition of S 1α,β,µν,ρσ (x, z) to the two-point function cancels the anomalous breaking (3.22). That is, we require This yields Now, we require that the translation WT relation (2.22) holds by adding the above local terms to the correlation functions. That is, our requirement is (4.10) The resulting relations among the coefficients in the counterterms are summarized in Appendix B. From those relations, we see that some coefficients are still left unfixed, but the coefficients in the expression (4.2) are completely determined as 14) This gives (4.17)
Final steps
We are now able to write down the axial U (1) anomaly in the three-point function (3.5) under the requirement of the translation WT relation (4.10); the latter requirement is accomplished by the counterterm (4.1). Then the axial U (1) anomaly is given by the sum of Eqs. This is the most non-trivial result of this paper. Going back to Eq. (2.6), the energy-momentum tensor also has a part that is antisymmetric under the exchange of indices, T anti-sym.
µν (x) in Eq. (2.8) is proportional to the equation of motion; its effect on the correlation functions must be at most local contact terms as the Schwinger-Dyson equation implies. We can in fact corroborate this argument by explicit calculations by using some regularizing prescription for T anti-sym.
µν (x). Here, however, we are content with the above argument and set T µν (x) → T sym.
µν (x) in what follows. We now re-express Eq. (5.1) as the anomalous divergence of the axial U (1) current in the curved spacetime. We expand D α j 5α (x) g , the divergence of the axial vector current in the curved spacetime as the power series of the vierbein around the flat spacetime: where δe µa (x) ≡ e µa (x) − δ µa and it is understood that the right-hand side is evaluated in the flat spacetime with appropriate local counterterms specified as above. Noting as far as the regularization preserves the Lorentz and parity covariance, we have where we have used [δ/δe νa (x)]S = e µ a (x)T sym. µν (x), δg µν (x) = δe µa (x)e ν a (x) + e µa (x)δe ν a (x) and Eq. (5.1) in the last equality. Comparing this with the expansion of the curvature, 6 we finally observe Eq. (1.1) for m 0 = 0.
Conclusion
In this paper, we have examined a possible use of the universal formula for the energymomentum tensor in gauge theory in the flat spacetime through the Yang-Mills gradient flow [17,18]. As a general argument indicates, after choosing local counterterms appropriately so as to restore the translation WT relation, we obtain the correct axial U (1) anomaly in Eq. (1.1) (in the flat space limit). From the present analysis, we can learn the following feature of the universal formula of the energy-momentum tensor. The universal formula is based on the gradient flow and its small flow time expansion of Ref. [13]. The latter asserts that any composite operator of flowed fields as t → 0 can be expressed as an asymptotic series of renormalized composite operators of unflowed fields with increasing mass dimensions. When two composite operators of flowed fields collide in coordinate space to form another composite operator, we have to consider the expansion in terms of another set of renormalized composite operators of unflowed fields. Consequently, it is not obvious what happens when the universal formula of the energy-momentum tensor collides with other composite operators, such as the axial U (1) current or the energy-momentum tensor, in coordinate space. Our present analysis illustrates that the formula in fact does not automatically fulfill the translation WT relation precisely when the formula coincides with other composite operators in coordinate space. On the other hand, our finding that local counterterms are sufficient to restore the translation WT relation ensures the expectation that the formula fulfills the translation WT relation when the energy-momentum tensor is in isolation in coordinate space; for this case, the translation WT relation is simply the conservation law of the energy-momentum tensor. 6 Our definition of the Riemann curvature is R α ρµν ≡ ∂ µ Γ α ρν − ∂ ν Γ α ρµ + Γ α λµ Γ λ ρν − Γ α λν Γ λ ρµ , where Γ λ µν ≡ 1 2 g λρ (∂ µ g νρ + ∂ ν g µρ − ∂ ρ g µν ) is the Christoffel symbol.
Thus, our analysis has revealed that the universal formula as it stands can be used only in on-shell correlation functions (i.e., correlation functions in which the energy-momentum tensor does not coincide with other composite operators in coordinate space). The incorporate of this point into (a generalization of) the universal formula is a forthcoming challenge. 7 A related issue is the possible generalization of the gradient flow to the curved spacetime. A possible generalization is ∂ t B µ (t, x) = g νρ (x)D ν G ρ,µ (t, x), B µ (t = 0, x) = A µ (x), (6.1) ∂ t χ(t, x) = g µν (x)D µ D ν χ(t, x), χ(t = 0, x) = ψ(x), (6.2) 3) It then appears interesting to see whether this setup improves the covariance under the general coordinate transformation and the restoration of the associated WT relations for the energy-momentum tensor. | 5,145 | 2018-03-12T00:00:00.000 | [
"Physics"
] |
Absence of Apolipoprotein E is associated with exacerbation of prion pathology and promotes microglial neurodegenerative phenotype
Prion diseases or prionoses are a group of rapidly progressing and invariably fatal neurodegenerative diseases. The pathogenesis of prionoses is associated with self-replication and connectomal spread of PrPSc, a disease specific conformer of the prion protein. Microglia undergo activation early in the course of prion pathogenesis and exert opposing roles in PrPSc mediated neurodegeneration. While clearance of PrPSc and apoptotic neurons have disease-limiting effect, microglia-driven neuroinflammation bears deleterious consequences to neuronal networks. Apolipoprotein (apo) E is a lipid transporting protein with pleiotropic functions, which include controlling of the phagocytic and inflammatory characteristics of activated microglia in neurodegenerative diseases. Despite the significance of microglia in prion pathogenesis, the role of apoE in prionoses has not been established. We showed here that infection of wild type mice with 22L mouse adapted scrapie strain is associated with significant increase in the total brain apoE protein and mRNA levels and also with a conspicuous cell-type shift in the apoE expression. There is reduced expression of apoE in activated astrocytes and marked upregulation of apoE expression by activated microglia. We also showed apoE ablation exaggerates PrPSc mediated neurodegeneration. Apoe−/− mice have shorter disease incubation period, increased load of spongiform lesion, pronounced neuronal loss, and exaggerated astro and microgliosis. Astrocytes of Apoe−/− mice display salient upregulation of transcriptomic markers defining A1 neurotoxic astrocytes while microglia show upregulation of transcriptomic markers characteristic for microglial neurodegenerative phenotype. There is impaired clearance of PrPSc and dying neurons by microglia in Apoe−/− mice along with increased level of proinflammatory cytokines. Our work indicates that apoE absence renders clearance of PrPSc and dying neurons by microglia inefficient, while the excess of neuronal debris promotes microglial neurodegenerative phenotype aggravating the vicious cycle of neuronal death and neuroinflammation. Supplementary Information The online version contains supplementary material available at 10.1186/s40478-021-01261-z.
Fig. S1
Infection of Apoe -/-mice with ME7 mouse adapted scrapie strain causes significant shortening of the prion disease incubation period. To assure the effect of Apoe -/-on the prion pathology is not specific to the 22L strain we intraperitonealy inoculated 8 -10 week-old WT and Apoe -/-mice of both sexes ( 50%:50% female to male ratio) with ME7 infectious and normal brain homogenate (NBH). Unlike the 22L strain, the ME7 strain does not replicate in non-neuronal cells and has slightly longer incubation period. Shown is Kaplan-Meier estimator of the incubation time in ME7 and NBH inoculated WT and Apoe -/-mice. The x-axis denotes days post inoculation (dpi) while the y-axis a percent of animals, which remain asymptomatic from the initial groups of 11 -13 ME7 and 22 -23 NBH inoculated WT and Apoe -/-mice. p < 0.0001 denotes the significance between 22L WT and 22L Apoe -/-groups (Log-rank test). Differences between 22L Apoe -/-and NBH Apoe -/-and between 22L WT and NBH WT, which are not shown on the graph also are significant at p < 0.0001. The difference between NBH WT and NBH Apoe -/-is not statistically significant.
ApoE Modulates Prion Pathology
Pankiewicz, JE et al. and e Body condition, which were assigned based on the following criteria 0 = normal, 1 = subtle, 1.5 = mild, 2 = moderate, 2.5= advanced, and 3 = severe. The tally of all subscores makes the Total Scrapie Score depicted in Fig. 2b. The mice were serially assessed starting from the 100 th day post inoculation by two independent examiners blinded to the animal genotype. All data represent mean SEM from n = 11 -12 mice per group. a -e p < 0.0001 (2-way ANOVA).
Fig. S3
PrP deposition has predilection to the thalamus and to the layer V of the neocortex and is more prominent in 22L Apoe -/-mice. Shown are representative microphotographs of coronal brain section from NBH and 22L inoculated WT and Apoe -/-mice at 23 wpi. The sections were immunostained against PrP. Scale bar: 350 μm. Abbreviations: Hip -hippocampus, L V -layer V, S1 Ctx -primary somatosensory cortex, and Th -thalamus. Apoe -/-astrocytes (e -/-), which were resolved using native-PAGE and SDS-PAGE, respectively. c and d Show is immunoblot analysis of the total PrP protein and that of proteinase K (PK) resistant PrP Sc in N2A/22L cells treated with astrocytic media containing natively lipidated apoE4 or control media from Apoe -/-astrocytes for 96 hrs., respectively. Also included is -actin as the loading control in c.
Fig. S5
Prion related expression of C3 by astrocytes is upregulated in the absence of apoE.
Shown are representative epifluorescent microphotographs of astrocytes in the layer V of the S1 cortex from mice of indicated experimental groups, which were double immunostained against C3 and GFAP. There is no C3 expression in astrocytes from control, NBH inoculated WT and Apoe -/-mice. At 15 wpi, C3 expression is detectable in astrocytes of 22L Apoe -/-mice but not in 22L WT mice, while at 23 wpi it is detectable in both genotypes but is significantly higher in 22L Apoe -/-mice. Scale bar: 40 μm. Abbreviations: Hip -hippocampus, L V -layer V, S1 Ctx -primary somatosensory cortex, and Th -thalamus. Asterisks demarcates limits of the ventroposterior thalamic nucleus (VPN), which along with the S1 cortex was selected for the quantitative analysis of Iba + microglia load.
Fig. S7
Apoe -/-is associated with upregulation of P2RY12 and TMEM119 homeostatic microglia markers. Analysis of a P2ry12 and b Tmem119 mRNA level. The qRT-PCR results are presented as the ΔC T values (n = 3 -11 mice/group). c Shown are representative epifluorescent microphotographs of microglia in the layer V of the S1 cortex from mice of indicated experimental groups, which were immunostained against TMEM119 and d the quantitative analysis of TMEM119 load in the S1 cortex, respectively (n = 5 -7 mice/group). a, b, and d p < 0.0001 (ANOVA); *p < 0.05, **p < 0.01, and ****p < 0.0001 (Holm's-Sidak's post hoc test). Values in a, b, and d represent mean + SEM. Scale bar: 40 μm in c. | 1,391.6 | 2021-09-26T00:00:00.000 | [
"Biology"
] |
Reversal of Synapse Degeneration by Restoring Wnt Signaling in the Adult Hippocampus
Summary Synapse degeneration occurs early in neurodegenerative diseases and correlates strongly with cognitive decline in Alzheimer’s disease (AD). The molecular mechanisms that trigger synapse vulnerability and those that promote synapse regeneration after substantial synaptic failure remain poorly understood. Increasing evidence suggests a link between a deficiency in Wnt signaling and AD. The secreted Wnt antagonist Dickkopf-1 (Dkk1), which is elevated in AD, contributes to amyloid-β-mediated synaptic failure. However, the impact of Dkk1 at the circuit level and the mechanism by which synapses disassemble have not yet been explored. Using a transgenic mouse model that inducibly expresses Dkk1 in the hippocampus, we demonstrate that Dkk1 triggers synapse loss, impairs long-term potentiation, enhances long-term depression, and induces learning and memory deficits. We decipher the mechanism involved in synapse loss induced by Dkk1 as it can be prevented by combined inhibition of the Gsk3 and RhoA-Rock pathways. Notably, after loss of synaptic connectivity, reactivation of the Wnt pathway by cessation of Dkk1 expression completely restores synapse number, synaptic plasticity, and long-term memory. These findings demonstrate the remarkable capacity of adult neurons to regenerate functional circuits and highlight Wnt signaling as a targetable pathway for neuronal circuit recovery after synapse degeneration.
showing synapse loss and defects in synaptic plasticity and longterm memory. They also reveal that cessation of Dkk1 expression induces synapse regeneration and recovery of long-term memory.
INTRODUCTION
Synapse loss and dysfunction are an early occurrence in several neurodegenerative conditions, including Alzheimer's disease (AD). Synapse vulnerability strongly correlates with cognitive decline before detectable neuronal death [1,2] and might contribute to the subsequent neuronal degeneration. Surprisingly, little is known about the molecular mechanisms that trigger synapse vulnerability in neurodegenerative diseases and even less about how this process can be prevented or reversed.
Increasing evidence suggests that deficient canonical Wnt signaling contributes to AD pathogenesis. Wnts are secreted proteins that modulate several aspects of brain development and function, including synapse formation, synaptic transmission, experience-mediated synaptic remodeling, and adult neurogenesis [3][4][5][6][7]. Genome-wide association studies (GWASs) have revealed a link between genetic variants of the Wnt co-receptor LRP6, which are associated with decreased canonical Wnt signaling activity, and late onset AD [8,9]. Loss of function of LRP6 in hippocampal neurons results in synaptic defects, cell death, and exacerbation of amyloid deposition in a mouse model of AD [10]. Importantly, the secreted protein Dickkopf-1 (Dkk1), which blocks canonical Wnt-Gsk3 signaling by sequestering the LRP6 receptor [11,12], is elevated in post-mortem brains from AD patients and in AD animal models [13][14][15]. In addition, oligomers of amyloid-b (Ab), the main component of amyloid plaques in AD, induce Dkk1 expression in cultured neurons and in brain slices [13,16,17]. Dkk1 disassembles excitatory synapses in a similar manner to Ab in cultured hippocampal neurons [17]. Importantly, blockade of Dkk1 with neutralizing antibodies protects synapses from Ab-mediated disassembly [17]. Collectively, these results suggest that Dkk1-mediated deficiency of Wnt signaling could contribute to synapse vulnerability. However, the impact of Dkk1 on hippocampal circuits, which are severely affected in AD, and its mechanism of action have not been explored.
Restoration of synaptic function after substantial synapse loss is crucial for the treatment of neurodegenerative diseases, as diagnosis is often obtained after significant damage has occurred. Although some downstream targets of Ab have been identified [18][19][20][21], only a limited number of studies has shown the ability of these molecules to fully restore function after significant synapse degeneration [18,20]. Thus, the identity of the signaling pathways that could restore synapse function remains poorly understood.
Here, we demonstrate a critical role for Wnt signaling in synapse stability and synaptic plasticity in the adult hippocampus. Using a transgenic mouse model that allows inducible expression of Dkk1, we investigated the contribution of deficient Wnt signaling to synapse function in the adult hippocampus without compromising embryonic and postnatal development. Inducible Dkk1 expression triggers disassembly of excitatory synapses, defects in long-term potentiation (LTP), and facilitation of long-term depression (LTD). Consistent with these synaptic plasticity changes, hippocampal-mediated long-term memory is impaired. These synaptic deficits occur in the absence of cell death or changes in the stem cell niche. Thus, the Dkk1 inducible (iDkk1) mouse is a good model system to study synapse degeneration in the absence of cell loss. Our studies reveal that Dkk1 induces synapse degeneration through the combined activation of Gsk3 and a novel target of Dkk1, the RhoA-Rock pathway. Notably, we found that reactivation of Wnt signaling, by cessation of Dkk1 expression, results in full recovery of synapse structure, synaptic plasticity, and long-term memory. In summary, our studies demonstrate that deficient Wnt signaling leads to synapse loss in vivo as observed at early stages of Abmediated pathogenesis and reveal the remarkable regenerative capacity of neurons in the adult hippocampus to assemble synapses within functional circuits. Our work highlights the importance of Wnt signaling in this process and identifies new targetable molecules for protecting synapses from degeneration.
Inducible Dkk1-Expressing Mice as a Model for Wnt Deficiency in the Adult Brain
To investigate the contribution of Wnt signaling to synapse maintenance in the adult hippocampus, we took advantage of a transgenic mouse model where expression of a potent and specific secreted Wnt antagonist, Dkk1, is controlled under the tetracycline-inducible system and CaMKII promoter [22]. Expression of Dkk1 is induced in adult mice by administration of doxycycline, bypassing any potential deleterious effects of deficient Wnt signaling during embryogenesis and postnatal development, stages when Wnt signaling plays a critical role [12,[23][24][25]. Mice carrying the Dkk1 coding region under the control of the doxycycline responsive element (tetO) [26] were crossed to mice carrying the tetracycline-controlled transactivator (rtTA2S; rtTA hereafter) downstream of the CaMKIIa promoter (CaMKII hereafter) [27]. Dkk1 expression was induced in adult (3-6 months of age) double transgenic mice (iDkk1) by administration of doxycycline into their diet for 2 weeks for full induction of the CaMKII-rtTA/tetO system [28] ( Figure 1A). Dkk1 expression was detected by RT-PCR in the hippocampus of adult iDkk1 mice fed with doxycycline, but not in control littermates fed with doxycycline or in iDkk1 mice not fed with doxycycline ( Figure 1B). Dkk1 mRNA expression could be detected after 3 days of induction and sustained for the duration of doxycycline administration ( Figure 1B). Thus, expression of Dkk1 is tightly regulated by doxycycline in iDkk1 mice. Most of our studies were performed after 2 weeks induction (unless otherwise indicated) when expression of Dkk1 was clearly observed by in situ hybridization ( Figure 1C) in a large number of principal hippocampal neurons in the CA1, CA3, and dentate gyrus (DG). These mice developed normally and had similar weight to control mice ( Figure S1). Dkk1 Does Not Affect Cell Death or the Stem Cell Niche in the Adult Hippocampus Deficiency in Wnt signaling has been implicated in regulating cell viability and the stem cell niche in the adult hippocampus [13,29,30]. We therefore examined these two aspects in the hippocampus of adult iDkk1 mice expressing Dkk1 for 2 weeks. TUNEL assays and the levels of cleaved caspase 3 revealed no changes in cell death ( Figures S2A-S2C). The number of NeuN-positive neurons was not altered ( Figure 1D) after 14 days or after 3.5 months of Dkk1 induction. These findings demonstrate that induced Dkk1 expression in the adult hippocampus does not affect cell viability.
Next, we examined possible changes in the stem cell niche in the adult DG, the main source of neuronal stem cells in the hippocampus. The number of newly born neurons, labeled by the specific marker doublecortin (Dcx) [31], did not change upon Dkk1 induction ( Figure S2D). Consistent with no changes in cell number, the overall morphology of the brain and the architecture of the hippocampus were normal (Figures S2A and S2B). Thus, induced expression of Dkk1 in the adult hippocampus does not affect cell viability or the stem cell niche.
Wnt Signaling Blockade in the Adult Hippocampus Results in Long-Term Memory Deficits
The hippocampus plays a role in emotional and cognitive functions, such as anxiety, learning, and memory [32,33]. We investigated the impact of Dkk1 expression in these processes. The exploratory activity and anxiety level, evaluated through an open-field and elevated plus maze, were identical in iDkk1 mice and controls ( Figures S3A and S3B). In addition, no defects were observed in the swimming speed and traveled distance in a Morris water maze (MWM) ( Figure S3C), demonstrating that Dkk1 expression does not affect motor function and hippocampal-dependent emotional behaviors.
Next, we investigated short-term memory using the discrete trial version of the spontaneous alternation T-maze test (30-s delay) [34]. This task depends on the animals' natural tendency to alternate and enter the previously unvisited arm. Both control and iDkk1 mice alternated between the two arms above chance (Figure S3D), indicating that short-term memory is unaffected in iDkk1 mice. We then evaluated hippocampus-dependent spatial reference learning and long-term memory using the MWM test [35][36][37]. Mice were first trained on the cued version of the task (platform marked by a visible flag). No difference in the time required to reach the visible platform was observed between control and iDkk1 mice ( Figure 1E), demonstrating that iDkk1 mice have no visual and procedural skills defects. Subsequently, mice were trained over 5 days to locate an invisible platform. The platform was removed during two probe tests (before the 4 th day and 24 hr after the 5 th day of training). iDkk1 mice took twice as long as controls to find the hidden platform on the 3 rd and 4 th days of training ( Figure 1E), demonstrating an inability to remember the location of the platform. Similarly, during the first probe test (probe I), iDkk1 mice spent less time in the target quadrant and crossed the virtual platform location significantly fewer times than controls ( Figures 1F and 1G), demonstrating impaired reference memory acquisition. Interestingly, after two further training days, iDkk1 mice reached the same performance level as control mice (probe II; Figures 1F and 1G), suggesting that additional training can overcome this memory deficit, as shown in some AD mouse models [38][39][40]. Thus, deficient Wnt signaling in the adult hippocampus leads to deficits in spatial memory acquisition.
To extend our study of memory-related hippocampal function, we used a single-trial contextual fear-conditioning paradigm [41, 42]. We compared the percentage of freezing time displayed by mice when re-introduced into the conditioning chamber after having associated the context to a foot shock. iDkk1 mice showed a considerably reduced freezing time compared to controls upon reintroduction to the conditioning chamber 24 hr after the context/shock single pairing ( Figure 1H). This result indicates that iDkk1 mice were unable to form a strong association between the contextual cues and the foot shock. Together, our behavioral studies show that iDkk1 mice exhibit deficits in hippocampal-dependent long-term memory.
Deficient Wnt Signaling Impairs Basal Synaptic Transmission and Synaptic Plasticity
Changes in long-term memory have been correlated with changes in long-term synaptic plasticity (i.e., LTP and LTD) [43][44][45]. We therefore investigated the ability of iDkk1 mice to express LTP at Schaffer collateral (SC)-CA1 synapses. A thetaburst stimulation (TBS) protocol was chosen as it mimics hippocampal activity during spatial learning [46]. TBS induced a 40% potentiation in control mice, whereas in iDkk1 mice it failed to potentiate these synapses (Figure 2A), demonstrating that Wnt blockade in the adult hippocampus results in the absence of TBS-induced LTP.
This defect could be due to a decreased connectivity, as a minimal number of synapses is required to promote LTP induction as defined as cooperativity [47]. Analyses of input-output curves at the SC-CA1 synapses revealed a defect at the strongest intensities of stimulation in iDkk1 mice, as the field excitatory postsynaptic potential (fEPSP) slope reached only half the magnitude of control animals ( Figure 2B). Thus, CA1 synaptic connectivity is affected by Dkk1 expression.
LTD is crucial to synaptic function, and its modulation by Wnt signaling remains unknown. To examine the impact of Dkk1 on LTD, we used a protocol that effectively induces LTD in adult mice with a strong low-frequency stimulation (LFS) consisting of two trains of 900 pulses at 2 Hz. With this protocol, we observed a 20%-30% depression at the SC-CA1 synapses in both control and iDkk1 animals ( Figure S4). We therefore decided to use a sub-threshold LFS (weak LFS) protocol, which has been shown to unmask enhanced LTD after exposure to Ab [48,49]. We used a weak LFS protocol, consisting of a single train of 900 pulses at 2 Hz, which induced a short-term but no long-term depression in control animals ( Figure 2C) [50]. In contrast to control animals, this weak LFS induced LTD in iDkk1 mice ( Figure 2C). This is the first demonstration that Wnt signaling contributes to LTD expression. Thus, Wnt deficiency induced by Dkk1 expression facilitates LTD and blocks LTP at SC-CA1 synapses in the adult hippocampus.
Dkk1 Triggers Degeneration of Excitatory Synapses in the Adult Hippocampus
To determine the impact of Dkk1 expression on synapse stability, we measured excitatory synapses by the co-localization of pre-and postsynaptic markers (vGlut1 and PSD95, respectively) in the CA1 stratum radiatum. iDkk1 mice exhibited fewer excitatory synapses ($40% decrease; Figure 3A). Consistently, we observed a similar decrease in the number of asymmetric (i.e., excitatory) synapses in the CA1 stratum radiatum by electron microscopy ( Figure 3B). Thus, Dkk1 triggers the degeneration of glutamatergic synapses in the adult hippocampus. To evaluate neuronal connectivity, we recorded miniature excitatory postsynaptic currents (mEPSCs) using whole-cell patch-clamp recordings from CA1 neurons. Although no changes in mEPSC amplitude were observed, we found a significant decrease in mEPSC frequency ($40%) in iDkk1 mice ( Figures 3C and 3D), consistent with a decrease in excitatory synapse number.
In contrast, induced Dkk1 expression did not affect the number of inhibitory synapses in the CA1 region, as determined by co-localization of the pre-and postsynaptic markers vGat and gephyrin ( Figure 4A). Consistently, the amplitude and frequency of miniature inhibitory postsynaptic currents (mIPSCs) in hippocampal CA1 neurons were unaffected by Dkk1 expression (Figure 4B). Thus, Dkk1 specifically affects the integrity of excitatory synapses without altering inhibitory synapses.
Dkk1 Triggers Synaptic Disassembly by Blocking
Canonical Wnt Signaling and Activating the RhoA-Rock Pathway Dkk1 is a known specific Wnt antagonist that blocks canonical Wnt signaling [11,12]. Wnt ligands bind to Frizzled (Fz) receptors (B) Input-output curve shows fEPSP slope in CA1 in response to different stimulus intensity of Schaffer collateral axons (12 slices from eight control and 11 slices from seven iDkk1 mice; *p < 0.05; repeated-measures ANOVA). (C) A weak low-frequency stimulation (weak LFS) induces short-term depression in control slices and LTD in iDkk1 slices (11 slices from six controls and nine slices from five iDkk1 mice; *p < 0.05; repeated-measures ANOVA; see also Figure S4). Data are represented as mean ± SEM. and the co-receptors LRP6, resulting in the inhibition of Gsk3b-mediated phosphorylation and stabilization of b-catenin, which translocates to the nucleus and activates transcription [51] ( Figure S5). In contrast, in the presence of Dkk1, binding of Wnts to Fz/LRP6 is blocked, resulting in enhanced Gsk3b-mediated b-catenin degradation by the proteasome pathway ( Figure S5) [12,51]. Thus, Dkk1 effectively blocks the function of several Wnts that signal through the LRP6 receptor. To investigate the impact of Dkk1 expression in canonical Wnt signaling, we evaluated b-catenin levels. Indeed, expression of Dkk1 resulted in fewer b-catenin puncta in the CA1 stratum radiatum of iDkk1 mice ( Figures 5A and 5B), indicating that Dkk1 blocks the canonical Wnt-b-catenin pathway. Co-localization with the synaptic marker vGlut1 showed that most b-catenin puncta were extrasynaptic, indicating that the loss of b-catenin induced by Dkk1 was not due to synapse loss. These results suggest that Dkk1 expression blocks canonical Wnt signaling in the adult hippocampus.
Next, we evaluated whether Dkk1-mediated synaptic loss is due to blockade of canonical Wnt signaling. We used the specific Gsk3 inhibitor BIO (6-bromoindirubin-3 0 -oxime), which activates the Wnt pathway downstream of Dkk1 [52,53]. Using a concentration of BIO, which does not affect synapse number on its own ( Figures 5C and 5D), we found that this Gsk3 inhibitor partially occluded Dkk1-induced synapse disassembly ( Figures 5C and 5D), suggesting that Dkk1 induces synapse loss through blockade of the Wnt-Gsk3b pathway but additional pathways might be involved. Dkk1 is mostly known as a specific and potent inhibitor of the Wnt-Gsk3b pathway; however, some studies have suggested that Dkk1 could activate non-canonical Wnt pathways [16,[54][55][56]. Although a role for the RhoA-Rock pathway in Dkk1 has not been reported in neurons, this cascade is of particular interest because it has been implicated in synaptic plasticity, learning, and memory and in Ab-mediated synapse loss [57][58][59]. We therefore examined the role of this pathway in Dkk1-mediated synapse degeneration. Exposure to Y27632, a specific Rock inhibitor, partially prevented Dkk1-mediated synapse loss (Figures 5C and 5E). Given the partial protection by both Gsk3b inhibition and Rock inhibition on Dkk1-mediated synapse degeneration, we examined the combined effect of Gsk3b and Rock inhibitors and found complete blockade of Dkk1-induced synapse loss ( Figures 5C and 5F). These results demonstrate a novel role for RhoA-Rock pathway in Dkk1 function and suggest that Dkk1 promotes synapse disassembly by blocking canonical Wnt signaling and activating the RhoA-Rock pathway.
Synaptic Loss, Plasticity Defects, and Behavioral Impairment Are Reversible Diagnosis of neurodegenerative diseases is often made after substantial loss of synaptic connectivity has occurred. Thus, understanding the reversible nature of synaptic degeneration is crucial for developing therapies for the treatment of cognitive impairments in neurodegenerative diseases. We therefore examined whether Dkk1-mediated synapse loss and network dysfunction is reversible. We performed in vivo on-off experiments ( Figure 6A), in which Dkk1 expression was induced for 2 weeks with doxycycline (On Doxy), followed by withdrawal of doxycycline for a further 2 weeks (Off Doxy). RT-PCR revealed that Dkk1 was expressed during the ''on'' period, but not after the ''off'' period, confirming that Dkk1 expression is tightly regulated by doxycycline ( Figure 6B). Remarkably, the number of excitatory synapses fully recovered to control levels after doxycycline withdrawal (Figures 6C and 6D). These results demonstrate that, even after significant degeneration, the number of synaptic connections can be restored when Dkk1 expression is turned off in the adult hippocampus.
We then evaluated whether defects in basal transmission, long-term plasticity, and long-term memory could be reversed in iDkk1 mice. We found that cessation of Dkk1 expression resulted in full recovery of basal synaptic transmission as indicated by the overlapping input-output curves from control and iDkk1 mice ( Figure 6E). Notably, TBS fully induced LTP in iDkk1 mice after termination of Dkk1 expression ( Figure 6F). Moreover, weak LFS induced short-term depression without inducing LTD in both control and iDkk1 mice ( Figure 6G). Finally, using the contextual fear-conditioning test, we found that turning off Dkk1 expression in iDkk1 mice completely recovers their ability to form long-term memory, as the percentage of freezing time was similar to control mice ( Figure 6H). Taken together, these studies show the remarkable capacity of the adult hippocampus to regenerate synapses that integrate into functional neuronal circuits. They also demonstrate that synapse degeneration can be reversed in the adult mouse brain by modulating Wnt signaling.
DISCUSSION
Here, we report that deficiency in Wnt signaling by inducibly expressing the specific Wnt antagonist Dkk1 in the adult hippocampus triggers the loss of excitatory synapses in CA1 neurons, impairs synaptic plasticity, and alters hippocampal-dependent function. These defects occur in the absence of cell death and require the combined activation of Gsk3b and Rock. Notably, Dkk1-induced synaptic defects are fully reversed upon cessation of Dkk1 expression. Our findings demonstrate that iDkk1 mice provide a unique model system to study the in vivo impact of deficient Wnt signaling on synapse vulnerability and to elucidate the molecular mechanisms that contribute to synapse regeneration after substantial synapse loss and dysfunction.
In the adult hippocampus, Dkk1 expression blocks Wnt signaling without affecting cell viability or the stem cell niche. Previous studies have shown that Dkk1 can promote cell death in models of AD, epilepsy, and ischemia [13,29,60,61] and affect adult neurogenesis by modulating the generation of immature neurons in the adult DG [30]. However, we found no evidence of increased cell death or an effect on the number of newborn neurons in the adult hippocampus of iDkk1 mice. This could be attributed to low levels of Dkk1 expression after 2 weeks induction of this protein. Given the direct effect of Wnts on synapses [62][63][64], our results suggest that Dkk1 induces synaptic vulnerability by directly targeting synapses.
Blockade of Wnt signaling with Dkk1 specifically affects excitatory synapses in the adult hippocampus, resulting in decreased mEPSC frequency and reduced excitatory synaptic transmission. In contrast, Dkk1 does not affect the number of inhibitory synapses or mIPSC frequency and amplitude. In the adult striatum, Dkk1 also induces the loss of excitatory synapses [22]. Together, these results highlight the crucial role for Wnt signaling in the maintenance of functional excitatory synapses in the adult brain. Although the mechanism by which Dkk1 specifically affects excitatory, but not inhibitory, synapses remains unknown, recent studies showed that LRP6 is predominantly present at excitatory synapses [65] and that deficiency in LRP6 affects excitatory synapses in the hippocampus [10]. These results suggest that Dkk1 acts through LRP6, which is upstream of the Wnt-Gsk3b pathway. Consistent with the inhibition of this pathway, the number of b-catenin puncta decreases in hippocampal CA1 following Dkk1 expression. Thus, induced expression of Dkk1 compromises the canonical Wnt pathway. Dkk1 induces synapse degeneration by modulating the Wnt-Gsk3b and the Rock pathways. Our studies reveal that inhibiting Gsk3b with BIO partially blocks Dkk1-mediated synapse disassembly, suggesting that additional pathways might be involved. Previous studies showed that activation of the RhoA-Rock pathway leads to spine loss and mediates Ab-induced synapse loss [57][58][59]. Interestingly, we found that Rock inhibition partially blocks Dkk1-induced synapse degeneration. In contrast, inhibition of both Gsk3b and Rock completely protects synapses against Dkk1. Therefore, we have identified Rock as a novel downstream target for Dkk1. How Dkk1 activates Gsk3b and Rock pathways is unknown. Both signaling cascades could influence the stability of the synapse by modulating different targets, such as b-catenin and microtubules in the case of Gsk3b or the actin cytoskeleton through the Rock pathway. Alternatively, both pathways could interact as recently reported for the role of Wnts in cell migration [66]. Future studies will elucidate the downstream events by which these two pathways contribute to Dkk1-mediated synapse vulnerability.
Induced Dkk1 expression affects long-term plasticity and memory. iDkk1 mice exhibit impaired hippocampus-dependent function as demonstrated by defects in contextual fear memory and spatial learning and memory. These results are in agreement with a previous study suggesting a role for Wnt signaling in memory [16,67]. Memory deficits have been associated with defects in long-term plasticity in the hippocampus [68][69][70]. Consistently, iDkk1 mice exhibit a significant impairment in LTP, a defect that could be due to the loss of 40% of excitatory synapses [47] and/or to the impaired ability of remaining synapses to respond to LTP induction. We also demonstrate a novel function for Wnt signaling in LTD. Previous studies showed that Gsk3b activation suppresses LTP [71] and enhances LTD [72], suggesting a role for Gsk3b downstream of Dkk1-mediated synaptic dysfunction.
Understanding the molecular pathways that promote the regeneration of synapses that integrate into networks is crucial for developing effective therapies to promote functional recovery. Here, we report that synapse loss, defects in synaptic plasticity, and memory deficits can be fully restored in iDkk1 mice after cessation of Dkk1 expression. Our findings demonstrate the remarkable capacity of adult neurons to regenerate functional circuits after substantial synapse loss and highlights that Wnt signaling is a targetable pathway in neurodegenerative diseases.
Animals
Experiments were performed according to the Animals Scientific procedures Act UK (1986). Double transgenic mice (iDkk1) were obtained as described in [22]. Adult (3-6 months old) iDkk1 and control mice (tetO-Dkk1, CaMKIIa-rtTA2, or wild-type littermates) were fed with pellets containing 6 mg/kg doxycycline (Datesand Group) ad libitum for 2 weeks, unless otherwise indicated. For the Figure S5). (B) Graph shows quantification of b-catenin puncta (***p % 0.001; ANOVA; four mice per genotype). (C) Confocal images show the presence of excitatory synapses (co-localized vGlut1 and Homer1 puncta; white arrows) in mature hippocampal neurons exposed to control or Dkk1 and specific Gsk3 and Rock inhibitors as indicated. The scale bar represents 2 mm. (D-F) Quantification of excitatory synapses per 100 mm dendrite after treatment with Dkk1 and with BIO, a Gsk3 inhibitor (D), with a Rock inhibitor, Y27632 (E), or with both BIO and Y27632 (F; *p < 0.05; one-way ANOVA test; n = 3 independent experiments per condition). Data are represented as mean ± SEM. on-off experiment, 2 weeks of doxycycline feeding was followed by 2 weeks of feeding with the original diet. Males were used for electrophysiological and behavioral experiments, whereas both genders were used for cellular biology experiments. See the Supplemental Experimental Procedures for more details.
Hippocampal Culture, Cell Transfection, and Drug Treatment Hippocampal cultures were prepared from embryonic day 18 (E18) embryos of Sprague-Dawley rats as described previously [73] and maintained for 21 days in vitro (DIVs). Purified recombinant Dkk1 (200 ng/mL; PeproTech) was applied to cells for 2 hr in the presence or absence of the Gsk3 inhibitor BIO (200 nM; BioVision Technologies) and ROCK inhibitor Y27632 (10 mM; Selleckchem). See the Supplemental Experimental Procedures for further details.
Immunofluorescence Staining
Brain slices from control and iDkk1 mice were incubated in blocking solution (10% donkey serum and 0.02% v/v Triton X-100 in PBS) for 4 hr at room temperature (RT). Primary antibodies were incubated overnight at 4 C. Secondary antibodies conjugated with Alexa 488, 568, or 647 (1:600; Invitrogen) were incubated at RT for 2 hr. In some experiments, brain sections were incubated with Hoescht for 5 min. Samples were washed in PBS and mounted in Fluoromount-G (SouthernBiotech).
Hippocampal neurons were fixed in 4% paraformaldehyde (PFA) in PBS for 20 min at RT, permeabilized for 5 min in 0.05% v/v Triton X-100 in PBS, and blocked in 5% BSA for 1 hr. Primary antibodies and secondary antibodies were each incubated for 1 hr at RT. Samples were washed in PBS and mounted in FluorSave Reagent (Millipore). See the Supplemental Experimental Procedures for more details.
Image Acquisition and Analyses
For evaluation of synaptic puncta, stacks of eight equidistant planes (0.2 mm; 76 3 76 nm/pixel) from hippocampal slices and cultured neurons were acquired on an Olympus FV1000 confocal microscope using a 60 3 1.35 numerical aperture (NA) oil objective. Four to seven fields were taken per brain slice, and three to four slices were analyzed per mouse. For hippocampal neurons, eight to ten image stacks of EGFP-transfected cells were taken per condition. Analysis was performed in Volocity software (PerkinElmer). See the Supplemental Experimental Procedures for more details.
Electrophysiology
For field potential recordings, parallel bipolar stimulation electrodes were placed in the stratum radiatum of the CA1 region and Schaeffer collateral fibers were stimulated with 0.1 ms duration constant-current paired-pulses (pulse interval 50 ms) delivered to the pathway at intervals of 10 s. Stimulus current was adjusted at the beginning of each recording to give a response approximately 50% of the maximum fEPSPs slope, after recording an input-output curve. fEPSPs were monitored using low-resistance glass pipettes (1-2 MU), filled with 4 mM NaCl in ACSF. Slices were subjected to a 15-20 min period of pre-LTP/pre-LTD baseline measurement every 10 s. Provided that the control response did not change by more than 5% during this 15-20 min period, LTP or LTD was induced. LTP was induced by a TBS protocol, which involved delivering two TBSs at an interval of 10 s, and each TBS was composed of five trains of stimuli at intervals of 200 ms, where each train contained four stimuli at 100 Hz. Two protocols of LFS consisting of two trains of 900 pulses delivered at 2 Hz with a 2.5 min gap (strong LFS) or one train of 900 pulses delivered at 2 Hz (weak LFS) were used to induce a LTD. Stimulus intensity for the TBS and LFS was the same as baseline recordings. Paired-pulse fEPSPs (20 Hz) were recorded at intervals of 10 s for at least 50 min after delivery of the TBS or LFS, and the slope of each fEPSP was measured. fEPSP-PPR was calculated as the ratio of the slope of the second to the first fEPSP. Recordings were made using an Axopatch 200B amplifier, filtered (1 kHz) and digitized (10 kHz), and then analyzed using WinEDR software or WinWCP software (freely available at http://spider.science.strath.ac.uk/ sipbs/software_ses.htm). For these experiments and patch-clamp recordings, see the Supplemental Experimental Procedures for further information.
Behavioral Studies
For all behavioral tests, adult male mice were handled daily for approximately 2 min, at least 4 days before the beginning of the test. Throughout experimentation and data analysis, the experimenter was blind to genotype. MWM, contextual fear conditioning, T-maze spontaneous alternation, open field, and elevated plus maze tasks are described in the Supplemental Experimental Procedures.
Statistical Analyses
For behavioral analyses, each mouse group consisted of at least seven animals. For immunofluorescence, data were generated from three or more independent experiments, each with one to four mice per genotype. All results were expressed as mean ± SEM. Statistical significance was calculated on the basis of a Student's t test, one-way ANOVA, or ANOVA for repeated measures when samples were normally distributed, followed by Scheffe or Bonferroni posteriori comparisons. Mann-Whitney or Kruskal-Wallis tests were used for non-normally distributed data followed by Dunn-Sidak posteriori comparisons (*p < 0.05, ***p % 0.001, **p % 0.01). | 6,879.6 | 2016-10-10T00:00:00.000 | [
"Biology"
] |
Signal Transmission in a Human Body Medium-Based Body Sensor Network Using a Mach-Zehnder Electro-Optical Sensor
The signal transmission technology based on the human body medium offers significant advantages in Body Sensor Networks (BSNs) used for healthcare and the other related fields. In previous works we have proposed a novel signal transmission method based on the human body medium using a Mach-Zehnder electro-optical (EO) sensor. In this paper, we present a signal transmission system based on the proposed method, which consists of a transmitter, a Mach-Zehnder EO sensor and a corresponding receiving circuit. Meanwhile, in order to verify the frequency response properties and determine the suitable parameters of the developed system, in-vivo measurements have been implemented under conditions of different carrier frequencies, baseband frequencies and signal transmission paths. Results indicate that the proposed system will help to achieve reliable and high speed signal transmission of BSN based on the human body medium.
Introduction
Signal transmission based on the human body, also termed intra-body communication (IBC), is a technology using the human body as the transmission medium for electrical signals [1]. Compared with short distance wireless communication technologies, such as Bluetooth and Zigbee, etc., this technology has several novel characteristics, which can be summarized as follows: (1) due to the fact OPEN ACCESS that the signal mainly transmits within the human body and little radiation leaks out, it avoids the disturbance of environmental electromagnetic noise and can achieve comparatively higher data rates; (2) as a special type of cable communication using the human body as transmission medium, it needs comparatively lower energy consumption [2]; (3) using this technology, communication can be started or stopped by touching, standing and sitting down of the human body [3]. Due to the advantages mentioned above, it is believed that the signal transmission technology based on the human body medium will offer significant advantages in BSNs used for healthcare [1,4] and other related fields.
Sensors used for signal detection are very important for achieving reliable signal transmission based on the human body. Firstly, to guarantee the safety of the human body, signals injected into the human body should be low [1,5]. Moreover, the impendence of human body also results in signal attenuation [6], therefore, sensors used for detecting the signal transmission within the human body should have high sensitivity. Secondly, with the influence from the floating ground of wearable electronical devices, signal transmission within the human body may suffer from great distortion, therefore, anti-interference properties are another important characteristic required by the sensor. Recently, two kinds of sensors have been chosen, the electrical sensor and electro-optical (EO) sensor. However, due to the fact that the electrical sensor has comparatively low input impedance and is easy to be interfered with by electromagnetic noise, the typical signal transmission distance based on this kind of sensor is only approximately 30 cm and the corresponding signal transmission rate is limited to 40 kbps [3]. As for the EO sensor, due to its extremely high input impedance, the influence of the electrical noise can be greatly decreased. Moreover, the ground electrode of the EO sensor is electrically isolated from the electronic circuits, which eliminates the influence of the floating ground potential [7]. As a result, both the noise and the distortion of the receiving signal can be greatly decreased, thereby high signal transmission rate can be achieved [3]. Therefore, EO sensors are believed to be suitable for detecting signal transmissions within the human body.
On the other hand, recent works with respect to the EO sensor used for signal transmission based on the human body medium mainly focus on sensors based on bulk electro-optical crystals [3,7,8]. This kind of EO sensor has an additional phase delay caused by natural binary refraction, which is very sensitive to the temperature that thereby influences the signal transmission quality. Moreover, the phase delay of this kind of EO sensor depends on the aspect ratio of EO crystals, which results in the comparatively big size of the EO sensor and will limit its application in BSNs. In our previous works, we proposed a novel signal transmission method based on the human body medium by using a Mach-Zehnder EO sensor, which will help to achieve signal transmission based on the human medium with the characteristics of good temperature dependence properties, small size and low power consumption [9]. In this paper, we present a signal transmission system based on the proposed method, and the frequency response properties as well as the parameters of the proposed system have been discussed. Firstly, we described the proposed signal transmission system, which consists of a transmitter, a Mach-Zehnder EO sensor and a corresponding receiving circuit. Secondly, the advantage with respect to the frequency response of the signal transmission based on the human body medium by using the proposed system has been verified by using in-vivo measurements. Furthermore, in order to determine the suitable parameters, the corresponding in-vivo signal transmission experiments with different carrier frequencies, baseband frequencies and multi-paths were implemented. Results indicate that the proposed method will help to achieve reliable and high speed signal BSN transmissions for healthcare and the other related fields. The rest of the paper is organized as follows. Section 2 describes the signal transmission system based on human body medium. Section 3 mainly focuses on the experiments and results discussion. Section 4 concludes the paper.
Signal Transmission System
Generally, the signal transmission approaches based on the human body medium can be divided into two types, which include electrostatic coupling type and galvanic coupling type [6,10]. Compared with the latter type, the former has the characteristic of less signal attenuation, which is very important for decreasing power consumption. Therefore, in our investigation the electrostatic coupling type was chosen as the approach for signal transmission based on the human body medium.
System Structure
The developed signal transmission system based on the human body medium is composed of a transmitter, a Mach-Zehnder EO sensor and a receiving circuit, as shown in Figure 1. In the developed system, a baseband signal is input to the transmitter through the input port first, and then it is modulated and amplified in the transmitter. The processed signal is coupled into the human body through the signal electrode, while it is also coupled into the earth ground through the ground electrode. Subsequently, signal transmission within the human body is received by a Mach-Zehnder EO sensor through the signal electrode. In the EO sensor, signal is coupled to the ground electrode of the Mach-Zehnder modulator, and then it transmits to the ground electrode of the receiver. Finally, signal is coupled into the earth ground, and thereby a signal loop has been established. On the other hand, the functions of the Mach-Zehnder electro-optical modulator can be described as follows: as shown in Figure 1, the Mach-Zehnder EO sensor consists of a laser diode, a Mach-Zehnder EO modulator and a photodetector. As the laser light (λ = 1,550 nm) from the laser diode passes through the two arms of the Mach-Zehnder EO modulator, the refractive index of the arm changes with the voltage of the received signal which is applied on it. Meanwhile, the modulator sums the optical waves from each arm, and thereby the phase change is converted into amplitude change. Subsequently, the change of optical amplitude is converted into the corresponding change of electric signals by the photodetector. Finally, the output signal of the photodetector is processed by the receiving circuit. As shown in Figure 1, the ground electrode is insulated from the photodetector and the receiving circuit, which will help to decrease signal noise and waveform distortion.
Transmitter
In the developed signal transmission system, the transmitter is mainly used for modulating the baseband signal and coupling it into the human body. As shown in Figure 2, the transmitter can be divided into a Field Programmable Gate Array (FPGA) module, amplifying and filtering module, and electrostatic coupling electrode. Additionally, the baseband signal is stored in the FPGA module, thereby no input port is integrated in the developed transmitter. The structure and function of the three modules can be described as follows: the FPGA module consists of a FPGA (ALTERA EP1C6, EP3C40), a Synchronous Dynamic Random Access Memory (SDRAM), a Programmable Read Only Memory (PROM) and a battery, etc. Firstly, it generates a carrier signal with the required frequency, then the carrier signal is modulated by the baseband signal which is stored in the PROM (16 Mbit) with the Differential Binary Phase Shift Keying (DBPSK) method. The amplifying and filtering module consists of a voltage amplifier and a band-pass filter. Due to the fact the Mach-Zehnder electro-optic modulator has extremely high input impedance, therefore, rather than using a complex constant current source to provide enough receiving signal amplitude in the transmitter based on electric sensor [11], here we can use a simple voltage amplifier with adjustable gain for amplifying the modulated signal. Additionally, a 500 kHz-30 MHz band-pass filter is added for filtering out the useless harmonic noise. Finally, the processed signal is coupled into human body by a customized copper electrode, which consists of a circular signal electrode with a radius of 10 mm and a rectangular ground electrode with a size of 10 cm × 2 cm. The signal electrode is parallel to the ground electrode, while the distance between them is 10 cm. Meanwhile, the signal electrode is connected with the ground electrode by using rubber material, which has a relative permittivity of 2.6.
Mach-Zehnder EO Sensor
As shown in Figure 1, our Mach-Zehnder EO sensor was composed of a laser diode, a Mach-Zehnder EO modulator and a photodetector. The principle of the Mach-Zehnder EO sensor can be described as follows: as shown in Figure 3, supposing that the electric field of the incident input light E in = Aexp(jωt), the electric fields in the two arms (E a and E b ) of the Mach-Zehnd EO modulator can be described as . Therefore, the electric field corresponding to the emergent light of the Mach-Zehnder EO modulator (E out ) can be expressed as [12]: 3 3 where φ a and φ b represent the phases of the arm a and b, respectively, l is the length of the EO crystal, λ 0 is the optical wavelength, n e and n o are the extraordinary refractive index and ordinary refractive index of EO crystal, respectively, Γ is the overlap integral factor representing the interaction between the electric field applied on the electrodes and the light wave field, L is the electrode length, G is the distance between the signal electrode and the ground electrode of the Mach-Zehnder EO modulator, γ 33 and γ 13 are the electro-optic coefficients of the EO crystal, V e represents the voltage applied on the EO crystal which represents the signal voltage transmitting within the human body, and φ 0 is the phase difference used for setting the operating point of the Mach-Zehnder EO modulator.
Subsequently, the relationship between the input optical power (P in ) and the output optical power (P out ) of the Mach-Zehnder EO modulator can be expressed as: Finally, the output voltage of the photodetector (V out ), which represents the output voltage of the whole Mach-Zehnder EO sensor, can be expressed as [13]: where k is the insertion loss of the modulator, and S and R k represent the conversion efficiency and the transimpedance of the photodetector, respectively. Therefore, the receiving signal voltage (V e ) transmitting within the human body can be achieved by measuring the output voltage of the photodetector (V out ).
Receiving Circuit
The receiving circuit is mainly used for amplifying, filtering and demodulating the signal output from the Mach-Zehnder EO sensor. As shown in Figure 4, our receiving circuit consists of a variable gain amplifier, a band-pass filter, a FPGA module and a battery, etc. In order to provide a signal with appropriate amplitude for the demodulation module based on FPGA, a variable gain voltage amplifier with functions of coarse adjustment and accurate adjustment was developed, in which the coarse adjustment (linear gain control) can be used for controlling gain artificially, while the accurate adjustment (index gain control) makes it possible to control the output signal voltage automatically according to the signal from FPGA through a Frequency to Voltage (F/V) converter circuit. Moreover, an active band-pass filter was developed for filtering noises. Finally, the receiving modulated signal is demodulated by the FPGA module, and thereby the original baseband signal can be achieved at the output interface of the receiving circuit.
Frequency Response
The frequency response of the signal transmission system based on the human body medium is an important factor that influences the quality of the receiving signal. In our investigation, the frequency response of the signal transmission based on the proposed Mach-Zehnder EO sensor was determined, while the corresponding in-vivo measurements with respect to the signal transmission based on electronic sensor was also carried out under the same conditions for comparison.
Method
Our experimental device, as shown in Figure 5(a), includes a transmitter, a proposed sensor which consists of a laser diode, a Mach-Zehnder EO modulator (10 Gb/s intensity modulator, made by JDS Uniphase Corporation, Milpitas, CA, USA) and a photodetector. Moreover, a scope meter (FLUKE 196C) was used for displaying the receiving signal. Additionally, to simulate the actual application of BSN, the transmitter and the scope meter were powered by a battery module. On the other hand, a 27 year-old male with 80 kg weight and 182 cm height was chosen as the subject. Figure 5(b) shows the experiment device with respect to the frequency response of the signal transmission system based on electronic sensor, in which the same transmitter shown in Figure 5(a) was used, while the scope meter serves as an electronic sensor. Figure 6 shows the in-vivo measurement results with respect to the frequency response of the signal transmission system based on the proposed EO sensor and the electronic sensor, respectively. It can be found from Figure 6 that signal frequency within the 2 MHz-30 MHz range has comparatively less effect on the signal attenuation in the signal transmission based on the Mach-Zehnder EO sensor along the different signal transmission paths, which include arm (20 cm), left arm-right arm (120 cm), torso-arm (70 cm) and leg-arm (180 cm). In contrast, it has comparatively greater effect on the signal attenuations of the signal transmission based on an electronic sensor. As shown in Figure 6(a), the results corresponding to the (20 cm) arm path with respect to the Mach-Zehnder EO sensor almost remain invariant within the frequency range of 2 MHz-30 MHz, while the maximum deviation of the results is only 2.85 dB. However, the corresponding results with respect to the electronic sensor have comparatively greater variation, while the maximum deviation is up to 20.70 dB. Additionally, even though both the signal attenuation curves shown in Figure 6(a) decrease gradually with the increase of signal frequency from 500 kHz to 2 MHz, the results referring to the Mach-Zehnder EO sensor are far less than the corresponding results referring to the electronic sensor. On the other hand, as shown in Figure 6(b,c,d), a similar phenomenon can also be found from the results corresponding to the paths of arm-right arm, torso-arm and leg-arm, which also indicate that the results referring to the Mach-Zehnder EO sensor have less variance in the frequency range of 2 MHz-30 MHz, while the results referring to the electronic sensor have comparatively greater variation. This phenomenon indicates that compared with the electrical sensor-based signal transmission, the proposed Mach-Zehnder EO sensor has a comparatively steady frequency response in the frequency range of 2 MHz-30 MHz. Meanwhile, the above phenomenon can be explained by the influences of the extremely high input impedance of the EO sensor as well as the electrically isolation between the ground electrode of the EO sensor and the receiving circuit [7,9]. Finally, if the signal frequency or the carrier frequency is set in this range, high quality of the signal transmission based on the human body medium will be expected. (c) (d)
Method
In order to verify the functions and determine the suitable parameters of the proposed signal transmission system used in the BSN, the corresponding in-vivo experiments with different carrier wave frequencies, baseband frequencies and multi-paths were implemented using the experimental device shown in Figure 7. Compared with the experimental device shown in Figure 5(a), a receiving circuit mentioned in Section 2.4, which includes a variable gain amplifier, a band pass filter and a FPGA module, was added to the experimental device. In our experiments, the signal "1010101010" was chosen as the baseband signal. In the FPGA module of the transmitter, the baseband signal was converted into the corresponding relocatable code singal (110011001100) first, and then it modulated the carrier wave signal with the DBPSK method. Subsequently, the modulated signal was coupled into the human body at the transmitting terminal and received by the proposed Mach-Zehnder EO sensor at the receiving terminal. Finally, the signal was processed and demodulated by the receiving circuit.
Influence of Carrier Frequency
The experiments mainly focus on the influence of carrier frequency on the signal transmission based on the human body medium, in which 500 kHz was chosen as the baseband frequency (BF), while the carrier frequency (CF) was set as 1, 4, 8 and 16 MHz, respectively. As shown in Figure 7, the signal was transmitted from the left arm and received at the right arm. Figure 8(a) shows the modulated signal received by the Mach-Zehnder EO sensor under the condition that the CF = 1 MHz. We can find from Figure 8(a) that the modulated signal has distortion, especially at the junction between the signals representing "0" and "1". Meanwhile, error codes can be found in the corresponding demodulated signal shown in Figure 8(e), while the measurement result of bit error rate (BER) is 1.25%. Furthermore, when the CF = 4 and 8 MHz, the quality of the modulated signal was increased, as shown in Figure 8(b,c). As a result, less error code has been found in the corresponding demodulated signal shown in Figure 8(f,g), while the measurement results of BER are 0.78% and 0.56%, respectively. On the other hand, the amplitudes of the demodulated signal shown in Figure 8(d) corresponding to CF = 16 MHz are decreased, and error codes can be found in the corresponding demodulated signal shown in Figure 8(h). Moreover, the BER measurement result is up to 1.62%. The phenomenon mentioned above can be explained as follows: (1) due to the fact that signal with the frequency less than 2 MHz generally results in comparatively high signal attenuation according to the results shown in Figure 6, thereby leading to the increasing of BER; (2) compared with the other three modulated signals shown in Figure 8, the modulated signal corresponding to CF = 16 MHz has more high frequency spectral. Due to the fact that signal attenuation also increases when signal frequency is higher than 20 MHz, thereby this results in the increase of the measured BER corresponding to CF = 16 MHz. Finally, as a conclusion, the carrier frequency used in BSN based on the human body medium should be higher than 1 MHz, while 8 MHz is a suitable frequency for CF.
Influence of Baseband Frequency
The experiments with respect to the influence of baseband frequency over the signal transmission system based on the human body medium has been implemented using the experimental device shown in Figure 7. Meanwhile, the carrier frequency was set as 8 MHz, and the baseband frequency was set as 100 kHz, 1 MHz, 4 MHz and 8 MHz, respectively. Figure 9(a) shows the modulated signal measured by the proposed EO sensor under the condition that BF = 100 kHz, while Figure 9(e) shows the corresponding demodulated signal processed by the receiving circuit. We can find from Figure 9(a,e) that both the modulation and the demodulation can be implemented correctly in this case, while the measured BER is 0.59%. Meanwhile, Figure 9(b) shows the modulated signal under the condition that BF = 1 MHz, while Figure 9(f) is the corresponding demodulation signal. we can find from Figure 9(b,f) that even the pulses corresponding to the conversion process (from "1" to "0" or from "0" to "1") has a comparatively smaller amplitude, the correct demodulation signal can also be achieved by using the receiving circuit integrated with variable gain amplifier, while the measured BER is 0.49%. Moreover, as shown in Figure 9(c,d), when BF was set as 4 MHz and 8 MHz, the pulses corresponding to the conversion process have almost the same amplitudes as the pulses representing "1" or "0". Additionally, even some wave distortions can be found from the modulated signals shown in Figure 9(c,d), these distortions were limited in an acceptable range and thereby results in the correct demodulation signals shown in Figure 9(g,h). As a result, the measured BER's corresponding to BF = 4 and 8 MHz are 0.68% and 0.73%, respectively. Figure (a,b,
Influence of Signal Transmission Path
In the signal transmission system based on the human body medium used in a BSN for healthcare, generally, biomedical sensors and the corresponding transmitters are located at different positions on the human body, and send their data to a sink node located somewhere on the human body (such as the wrist). Therefore, the influence of the signal transmission path within the human body should be considered. As shown in Figure 10, the receiving electrode was fixed on the position R of the human body, while the transmitting electrode was fixed on the positions T 1 , T 2 , T 3 and T 4 , respectively, thereby resulting in signal transmission paths with different lengths, which include the arm path (T 1 -R, 20 cm), torso-arm (T 2 -R, 70 cm), leg-arm (T 3 -R, 180 cm) and left arm-right arm (T 4 -R, 120 cm) in our experiment. (e) (f) (g) (h) Figure 11(a-d) show the modulated signals corresponding to the four signal transmission paths (T 1 -R, T 2 -R, T 3 -R and T 4 -R) measured by the proposed EO sensor under the conditions that CF = 8 MHz, BF = 1 MHz, while Figure 11(e,f,g,h) are the demodulated signals corresponding to Figure 11(a,b,c,d), respectively. According to Figure 11(a), we can find that the modulated signal has comparatively bigger amplitude. This phenomenon can be explained by the fact that signal transmission within the arm has less attenuation compared with the other three paths. Generally, a shorter length of the signal transmission path will result in a smaller signal attenuation [14,15]. Moreover, the comparatively high quality of the demodulated signal can also be achieved in this case, as shown in Figure 11(e). As a result, the measured BER is only 0.32% in this case. On the other hand, it can be found from Figure 11(b,c,d) that the modulated signals corresponding to the paths of torso-arm, leg-arm and left arm-right arm have similar waveforms and amplitudes. Meanwhile, according to the corresponding demodulated signals shown in Figure 11(f,g,h), all the signals shown in Figure 11(b,c,d) can be demodulated correctly, and the measured BER's corresponding to Figure 11(f,g,h) are 0.47%, 0.62% and 0.49%, respectively. Therefore, it can be deduced that signal transmission path generally has less influence on the quality of the signal transmission based on the human body medium.
Conclusions
In this paper, a BSN signal transmission system based on the human body medium using a Mach-Zehnder EO sensor has been proposed. We demonstrated that, compared with the signal transmission system based on an electrical sensor, the proposed system based on a Mach-Zehnder OE sensor has a steady frequency response in the frequency range of 2 MHz-30 MHz. Furthermore, the corresponding in-vivo signal transmission experiments with different carrier wave frequencies, baseband frequencies and multi-paths were implemented, and some conclusions can be drawn as follows: (1) generally, the carrier frequency used in BSN based on the human body medium should be higher than 1 MHz, while 8 MHz is a suitable frequency; (2) using the proposed method, signal transmission with the BF range of 100 kHz-8 MHz can be achieved under the condition of CF = 8 MHz; (3) the signal transmission path has less influence to the quality of the discussed signal transmission based on the human body medim (electrostatic coupling type). Our results indicate that the proposed system will help to achieve reliable and high speed signal transmission based on the human body medium in BSNs used for healthcare and other related fields. | 5,714.8 | 2012-11-30T00:00:00.000 | [
"Computer Science"
] |
Distributed under Creative Commons Cc-by 4.0 Global Sensitivity Analysis of a Dynamic Model for Gene Expression in Drosophila Embryos
It is well known that gene regulation is a tightly controlled process in early organismal development. However, the roles of key processes involved in this regulation, such as transcription and translation, are less well understood, and mathematical modeling approaches in this field are still in their infancy. In recent studies, biologists have taken precise measurements of protein and mRNA abundance to determine the relative contributions of key factors involved in regulating protein levels in mammalian cells. We now approach this question from a mathematical modeling perspective. In this study, we use a simple dynamic mathematical model that incorporates terms representing transcription, translation, mRNA and protein decay, and diffusion in an early Drosophila embryo. We perform global sensitivity analyses on this model using various different initial conditions and spatial and temporal outputs. Our results indicate that transcription and translation are often the key parameters to determine protein abundance. This observation is in close agreement with the experimental results from mammalian cells for various initial conditions at particular time points, suggesting that a simple dynamic model can capture the qualitative behavior of a gene. Additionally, we find that parameter sensitivites are temporally dynamic, illustrating the importance of conducting a thorough global sensitivity analysis across multiple time points when analyzing mathematical models of gene regulation.
INTRODUCTION Gene regulation in embryonic development
Embryonic development in animals is very precisely controlled by a network of regulatory proteins (Davidson, 2010;Peter and Davidson, 2011).For any particular protein, the exact level of expression at a specific time point can be crucial to the proper development of the organism (Davidson & Levine, 2008).Within each cell of the developing embryo, protein abundance is a function of two key molecular events: transcription and translation.Transcription is the process of reading a gene in a DNA template to produce a messenger RNA (mRNA), while translation is the process of reading the mRNA to produce a protein product.A simple, but long-standing question in Biology is the following: Which is contributing more to the variance in protein levels in cells, transcription or translation?
Work analyzing the importance of transcription in mammalian cells
Many experimental studies have been conducted in an attempt to understand the roles of transcription and translation in regulating the dynamic nature of mRNA and protein concentrations (Maier, Guell & Serrano, 2009;Vogel et al., 2010;Schwanhausser et al., 2011;Beck et al., 2011;Bantscheff et al., 2012;Vogel & Marcotte, 2012;Li, Bickel & Biggin, 2014).One such recent detailed study, conducted by the Biggin lab using data from ubiquitously expressed (housekeeping) genes in cultured mammalian cells, aimed to improve existing quantification of protein abundances through statistical analysis of the impact of experimental error (Li, Bickel & Biggin, 2014).In this study, a two-part regression procedure was used to derive new estimates of protein abundance from the 2011 data set of Schwanhausser et al. (2011).Using these new, corrected measurements of protein abundances, along with previous measurements of mRNA and protein degradation rates, they were able to determine the relative importance of transcription, translation, mRNA degradation, and protein degradation (see Fig. 1A).This analysis is referred to as the measured protein error strategy.The result of this procedure found that for the 4,212 genes considered, transcription contributed the most to protein abundance (∼38%), with translation contributing slightly less (∼30%), followed by mRNA degradation (∼18%) and protein degradation (∼14%) (Li, Bickel & Biggin, 2014).This result is important to note, as it differs drastically from Schwanhausser et al.'s original conclusion that translation accounts for the largest contribution (∼55%) to overall variance in the cellular abundance of proteins (Schwanhausser et al., 2011;Li, Bickel & Biggin, 2014).
Existing mathematical models of gene expression
Due to the quantitative nature of gene regulation in the embryo, and the advent of new experimental techniques giving rise to massive amounts of mRNA and protein concentration data, various mathematical models have been derived and implemented to help understand the complexity that lies within developmental gene regulatory networks.These models range from static models, considering only transcription at a single time point in development in a single cell, to dynamic spatio-temporal models that incorporate transcription, translation, diffusion, and decay rates for a network of genes that regulate one another over a continuous time frame (Jaeger et al., 2004;Santillan & Mackey, 2004;Bintu et al., 2005;Janssens et al., 2006;Zinzen et al., 2006;Segal et al., 2008;Gertz, Siggia & Cohen, 2009;Fakhouri et al., 2010;He et al., 2010;Bieler, Pozzorini & Naef, 2011;Janssens et al., 2013;Ilsley et al., 2013;Dresch et al., 2013;Samee & Sinha, 2014).
To accurately model protein abundance in a metazoan animal, such as a mouse or fruit fly, one must consider both spatial and temporal dynamics in the developmental system.One such model that we developed uses a discretized reaction-diffusion equation to model concentrations of mRNA and protein in a developing Drosophila embryo across n nuclei (Dresch et al., 2013).This model not only incorporates terms for mRNA and protein synthesis and decay, but also diffusion of these molecules in the developing embryo.This is particularly important in Drosophila development as the early stages of embryogenesis are marked by 13 mitotic (nuclear) divisions in the absence of cellular divisions, resulting in a multinucleate syncytial embryo (see Fig. 1B).
In its simplest form, the model can be written as: where a represents the specific mRNA or protein that the equation corresponds to, i represents the specific nucleus, D a and λ a are the corresponding diffusion and decay rates, and Y represents the entire vector of mRNA and protein concentrations within the system being modeled.Many similar models have been used to model the expression dynamics of the gap gene system in the developing Drosophila embryo (Jaeger et al., 2004;Okabe-Oho et al., 2009;Ashyraliyev et al., 2009;Bieler, Pozzorini & Naef, 2011;Holloway et al., 2011;Janssens et al., 2013;Holloway & Spirov, 2015).Although these models all rely on an underlying reaction-diffusion framework, they vary greatly in their implementation.Both deterministic (Jaeger et al., 2004;Ashyraliyev et al., 2009;Bieler, Pozzorini & Naef, 2011;Janssens et al., 2013) and stochastic (Okabe-Oho et al., 2009;Holloway et al., 2011;Holloway & Spirov, 2015) models have been able to accurately predict the effects of particular perturbations to the network.Stochastic models of hunchback regulation have been used to shed light on the underlying factors that reduce noise and promote stability of the hunchback gradient.These include the number or arrangement of BICOID and KRUPPEL binding sites within the regulatory sequences that control transcription of the hunchback gene as well as protein diffusion (Okabe-Oho et al., 2009;Holloway et al., 2011;Holloway & Spirov, 2015).In this study, we focus on the broad impacts of transcription, translation, diffusion, and decay, and do not consider specific transcription factor binding sites within regulatory DNA sequences.Thus, we focus on a deterministic model and do not include any stochasticity.
Global parameter sensitivity analysis and HDMR
To help develop a better understanding of the model parameters, including how their values impact the model output (protein abundance) and how one might interpret that impact with respect to the biological system, parameter sensitivity analysis is needed.Parameter sensitivity analysis refers to a mathematical analysis of the change in model output as a result of variation in the input parameter values (Frey & Patil, 2002;van Riel, 2006;Tang et al., 2006;Dresch et al., 2010;Ay & Arnosti, 2011;Jarrett et al., 2014).This analysis can be done locally, at a particular point in parameter space, or globally across the entire parameter space.
Local parameter sensitivity analyses are typically implemented by simply computing or approximating the partial derivative of the objective function at a particular point in parameter space to determine how the function changes locally with respect to small variations of a particular parameter (van Riel, 2006;Dresch et al., 2010;Reeves & Fraser, 2009).The major advantages to adopting a local approach are that it is straightforward, often easy to interpret, and computationally inexpensive.However, a significant limitation of local methods is that when dealing with a large parameter space focusing on a single point in that space may not be representative of much of the overall parameter space.In contrast, global methods allow one to calculate parameter sensitivities while considering the full range of parameter space (Jarrett et al., 2014).Most global methods also have the ability to calculate higher order sensitivities, which capture interactions between parameters.This can be challenging to do with a local method, especially when parameter space is large and many different parameter combinations within that space lead to valid model outputs.Therefore, in this study we focus on a global method for parameter sensitivity analysis for our model of gene expression in the Drosophila embryo.
Higher Dimensional Model Representation (HDMR) is a robust global method for calculating parameter sensitivities (Ziehn & Tomlin, 2009;Dresch et al., 2010).This method uses the Monte Carlo integration method to decompose the model output, f (x), often referred to as the objective function, into terms of increasing dimensionality with respect to the parameters x 1 ,...,x N : In the above equation, f 0 is the main effect, and is approximated by the overall mean of model output over all parameter sets sampled.Each function f i (x i ) is a first-order term representing the effect of the parameter x i acting independently on the model output.Each function f ij (x i ,x j ) is a second-order term representing the effect of the parameters x i and x j on the model output.These terms represent the impact pairwise parameter interactions have on determining the model output.
This approximation is done on a bounded subset of R N , where N is the number of model parameters.The bounded subset represents the parameter space and in each dimension, corresponds to a realistic range for the given parameter.In some applications, the parameter space is known or experimentally determined using empirically determined measurements (Ziehn & Tomlin, 2009;Tang et al., 2006).In other cases, parameter space is chosen based on model assumptions and simulations (Frey & Patil, 2002;Gutenkunst et al., 2007;Dresch et al., 2010;Ay & Arnosti, 2011;Jarrett et al., 2014).
One of the main assumptions when using HDMR is that the objective function values are normally distributed (Ziehn & Tomlin, 2009).Thus, using the bounded parameter space, one generates a pseudo-random sampling using a Sobol Set (Sobol, 1976).The bounded parameter space must be a hypercube in N-dimensional space, so it can be normalized to the unit hypercube for sampling.Each set of parameter values is then used as input for a model simulation and the corresponding model output is obtained.Once this sampling has been done for all parameter sets sampled, HDMR approximates the mean of the output values, f 0 , as well as higher order terms using orthonormal polynomial approximations.First-and second-order terms are then normalized by the total variance to obtain the main effect of each parameter and the effects of pair-wise parameter interactions, referred to as first-and second-order sensitivity indices respectively.Although this method has the ability to calculate higher-order sensitivities, it has been shown that using only first-and second-order terms is sufficient to approximate the total sensitivity in the system (Li, Rosenthal & Rabitz, 2001;Liang & Guo, 2003;Dresch et al., 2010).
In this study, we utilize a global HDMR analysis to investigate the sensitivity of our Drosophila embryo gene expression model to the individual transcription, translation, diffusion, and decay rate parameters and higher-order interactions between these parameters.In addition, we compare our results to those in mammalian cells and other studies that have attempted to model different gene expression systems (Li, Bickel & Biggin, 2014).
Simplified model and parameters
In this study, we use a simplified version of our earlier model (Dresch et al., 2013), which was used to predict both mRNA and protein concentrations along a one-dimensional strip of nuclei in a developing Drosophila embryo.
The simplifying assumption applied in the current study is that the gene of interest has spatially uniform transcriptional activity.Thus, the transcription rate is held constant.This allows us to utilize simple numerical solvers such as Euler's method or Runga-Kutta methods, and to measure the relative importance of transcription to the model output using a single parameter, σ .Note that diffusion is discretized with respect to space and zero flux boundary conditions are used (Dresch et al., 2013).For 2 ≤ i ≤ n − 1, where Here, y j represents mRNA concentrations for 1 ≤ j ≤ n and protein concentrations for n + 1 ≤ j ≤ 2n.A schematic of the biological processes incorporated in the model is shown in Fig. 1.The model parameters are defined in Table 1.For all of the analysis and results shown in Table 2 and Figs.2-4, 52 nuclear positions are used at approximately 50% egg height (ventral-dorsal) from approximately 10% to 90% egg length (anterior-posterior) and the numerical solver Runga-Kutta 4 is used to approximate the solutions to the system of Ordinary Differential Equations shown above.
Initial conditions
Five different initial conditions are used in this study.Each initial condition corresponds to a different group of spatially expressed genes present in an early Drosophila embryo.The first three initial conditions all correspond to genes that are ubiquitously expressed at spatially uniform levels, such as Zelda in the Drosophila embryo.The only difference between these three initial conditions are the levels of the initial concentrations of mRNA and protein.The initial protein and mRNA concentrations used are 0, 1 2 , and 1.These initial conditions allow us to compare our calculated parameter sensitivities to the contribution to variance in mammalian housekeeping genes measured by Li, Bickel & Biggin (2014).
The other two initial conditions used in the study are representative of genes that are known to be extremely important to early development in Drosopihla embryos, anterior and posteriorly deposited maternal factors, such as Bicoid and Nanos respectively.The initial conditions contain a nonzero mRNA concentration in either the most anterior or posterior nucleus at the initial time point, zero initial mRNA concentrations at all other spatial locations, and zero initial protein concentrations in all nuclei.
Exploring parameter space
Before a global parameter sensitivity analysis can be performed, one must first define parameter space.This is done by finding a range for each model parameter that results in 'realistic' model predictions.This will result in a six-dimensional hypercube, as required for the Sobol Set sampling method (Sobol, 1976).For our model, we determine this range for each parameter by exploring six-dimensional parameter space and recording the parameter value combinations resulting in model predictions of reasonable protein concentrations.
Note: Lower bounds were chosen using the values estimated in Dresch et al. (2013).
2. Choose a single parameter.Test each of the boundaries of the parameter range by holding the parameter of interest constant at the lower (or upper) bound and varying all other parameters within their current ranges.Then, run all simulations for t ∈ [0,10] with all 5 different initial conditions.If >5% of all simulations result in saturated or undetectable protein concentrations, then modify the boundary of the parameter range by increasing or decreasing it by a number between 0.001 and 0.1.Note: Lower bounds on parameter ranges were never allowed to go below 0.0, as negative parameter values would violate the model assumptions.
3. Repeat step 2 for all remaining parameters.
4. When all parameter ranges have been tested, let all 6 parameters vary within their current ranges.If > 5% of all simulations result in saturated or undetectable protein concentrations, then go back to step 2. If not, then stop.
The realistic parameter ranges we obtained and used for our subsequent sensitivity analyses are listed in Table 1.
Objective function
To calculate model parameter sensitivies, we utilize a previously developed HDMR MATLAB script (Ziehn & Tomlin, 2009).Due to the spatial and temporal dynamics of our model, we perform this analysis using a variety of different objective functions.We focus our analysis on a twenty minute time window in an early blastoderm embryo.Within this time window, we perform our sensitivity analysis at eleven distinct, evenly distributed time points.This removes any bias in the time point chosen and allows us to look at the behavior of parameter sensitivities over time.
One should note that a parameter's first-order sensitivity is calculated by approximating the variance with respect to that parameter divided by the total variance of the objective function.Thus, at t = 0, this results in a ratio of 0 0 since all parameter sets will result in an output equal to the initial condition.Thus, we define the first-order sensitivity of each parameter to be zero when the model output used is from the time point t = 0.
Spatially, the model aproximates mRNA and protein concentrations at 52 nuclei across the anterio-posterior axis of the embryo.When performing our sensitivity analysis, we focus on protein concentrations only and consider three different spatial locations: a nucleus in the anterior of the embryo, in the posterior of the embryo, or in the center of the embryo.We also perform sensitivity analyses using the mean protein concentration over all 52 nuclei.
To avoid any inherent bias in our results, we compute parameter sensitivities using each of the five initial conditions and each combination of spatial and temporal protein concentrations.Thus, the results shown are represenatitive of 120 runs of the HDMR algorithm.
COMPARING MODEL SENSITIVITIES TO EXPERIMENTAL DATA
We begin our analysis by comparing the sensitivities for all six model parameters to previously defined contributions of these processes (Li, Bickel & Biggin, 2014).For all inititial conditions, we identify a time point in development such that our model parameter sensitivities are qualitatively similarly to the calculated contribution to protein abundance from Li, Bickel & Biggin (2014) (Fig. 2).However, we do observe minimal quantitative differences.When modeling a ubiquitous gene with initial concentrations of one and calculating sensitivities using model output at t = 4, we find that the first-order sensitivities differ by a maximum of ±0.09, with a Pearson correlation coefficient of >0.96 between the first-order sensitivities and the calculated contribution to variance in protein levels from Li, Bickel & Biggin (2014).When modeling an anterior maternally deposited gene and calculating sensitivities using model output at t = 2, we find that the first-order sensitivities differ by a maximum of ±0.15, with a Pearson correlation coefficient of >0.89 between the first-order sensitivities and the calculated contribution to variance in protein levels from Li, Bickel & Biggin (2014).These small observed differences could be due to a number of biological reasons, including noise in the experimental data, species to species variation, or variation in the time point in development in which different genes are expressed.The sensitivities in Fig. 2 are similar to those obtained for all other runs of the model tested (i.e., posterior maternally deposited and ubiquitous genes with other initial concentration values).
To determine whether or not parameter interactions play a significant role in the model's behavior, in addition to computing first-order sensitivites, we also consider second-order and total sensitivities (Fig. 2).Here, note that second-order sensitivities account for a very small portion (<12%) of the total sensitivity for each parameter.Including second-order sensitivities allows us to account for 99% of total model sensitivity.Therefore, as has been done in past implementations of HDMR, the sum of firstand second-order sensitivities is shown as an approximation to the total sensitivity of each parameter (Li, Rosenthal & Rabitz, 2001;Liang & Guo, 2003;Dresch et al., 2010).However, one should note that first-order sensitivities alone account for over 85% of total sensitivity.This indicates that a great deal of information regarding the contribution of each parameter to the overall behavior of the gene expression system can be gleaned from the first-order sensitivities, which are in strong agreement with the experimental data from Li, Bickel & Biggin (2014) (Fig. 2).
DYNAMIC PARAMETER SENSITIVITIES
Due to the dynamic nature of Drosophila development, we also analyze parameter sensitivities at multiple different time points.Figure 3 contains first-order sensitivities corresponding to model simulations with the same initial conditions as those used in the analyses of a ubiquitously expressed and an anteriorly-deposited mRNA in Fig. 2, but calculated using model output at a later time point (t = 10 in both cases).One should note that qualitatively, the parameter sensitivities have changed drastically.The sensitivity of λ, the parameter representing mRNA decay, has increased significantly in both cases (from 0.17 to 0.31 for a ubiquitous gene and 0.03 to 0.28 for an anterior deposited gene) while the sensitivities of both σ and τ , parameters representing transcription and translation, have decreased.This stark difference from the sensitivities shown in Fig. 2 suggests that one should further investigate the dynamic nature of the parameter sensitivites with respect to this model.
To better understand how parameter sensitivities are changing over the twenty minute time interval in which we run the model, we choose eleven different evenly distributed time points (t = 0,2,4,6,8,10,12,14,16,18,20).For each set of initial conditions, the HDMR algorithm is implemented using protein concentrations from each of the eleven time points to calculate the parameter sensitivities.The results corresponding to model simulations with the same initial conditions as those used for the analysis in Figs.2-3 are shown in Fig. 4 for each model parameter as a function of the time point used for the HDMR analysis.
When considering the dynamics of parameter sensitivities, the general trend observed in all results obtained using nonzero initial conditions is that the model is most sensitive to changes in σ and τ , parameters representing transcription and translation, at earlier time points, and becomes more sensitive to λ, the parameter representing mRNA decay, at later time points (Fig. 4).
In Fig. 4A, the model begins with nonzero mRNA and protein concentrations across all nuclei.Thus, σ (transcription) is important, but the model is more sensitive to τ (translation) initially as translation is increasing the protein concentrations at a rate proportional to the nonzero mRNA concentration (Fig. 4).One should also note that the mRNA concentration is increasing at a rate equal to σ (transcription) and decreasing at a rate proportional to the mRNA concentration, causing the model to become more sensitive to λ (mRNA decay) and σ (transcription) at later time points (t > 5) (Fig. 4).
In Fig. 4B, we observe a slightly different trend in the parameter sensitivities during early time points due to the fact that initial mRNA concentrations are zero in all nuclei, except the single most anterior nucleus and protein concentrations are zero in all nuclei.Due to the initial mRNA and protein concentrations of zero in the middle nucleus, and the fact that protein can only be synthesized if the mRNA concentration is greater than zero, the model is more sensitive to σ (transcription) than τ (translation) at early times points.One should note that although the model predictions are quite distinct for the different initial conditions, both predictions show mRNA and protein concentrations in the middle nucleus eventually approaching equilibrium values, and the model exhibits similar relative parameter sensitivities at later time points (Fig. 4).Regardless of the qualitative similarities, it is important to note that the dynamic parameter sensitivities are dependent on the initial conditions of the system (Fig. 4).
In both cases shown here, the higher model sensitivity with respect to σ (transcription) than τ (translation) at mid to late time points is in agreement with the conclusion of Li et al. that transcription explains the largest percentage of variance in true protein levels (Li, Bickel & Biggin, 2014).However, the large contribution from mRNA decay at late time points was not found in the Li et al. study, as they were unable to consider any dynamic protein levels.
DISCUSSION
To develop a deeper understanding of the mechanisms involved in regulating the levels of protein concentrations during early development in metazoans, one must consider not only the experimental data that has been carefully collected in the lab, but also the power of mathematical models and the biological interpretation of the parameter values that they use (Dresch et al., 2010;Ay & Arnosti, 2011).A few very important aspects of modeling that one must consider are how well the model can simulate the reality of the overall system, how well the model agrees with the parts of the system that are already defined biologically, and whether the model parameters can be interpreted in terms of the real biological phenomena that they are assumed to represent.The last of these points is one of the most important, yet remains absent from many modeling studies.In this study, we have used parameter sensitivity analysis to try to unravel the importance of parameter values in a reaction-diffusion model and in doing so to better understand the power of this modeling approach as well as the relative contribution of transcription and translation in regulating protein abundance.
The relative contribution of transcription, translation, and decay rates in overall protein abundance can be aproximated using experimental data (Schwanhausser et al., 2011;Li, Bickel & Biggin, 2014); however, we ask the question of whether these relative contributions match those found using a mathematical model of gene regulation.We find that the relative parameter sensitivities are in close agreement with the contributions found experimentally (Li, Bickel & Biggin, 2014) for various initial conditions at particular time points (Fig. 1).This leads us to believe that even this simple reaction-diffusion model is capturing the correct overall dynamics for a gene with spatially uniform transcriptional activity.
A number of recent studies have directly addressed the dynamics of gene expression in the model system of the developing Drosophila embryo through quantitative imaging approaches.In Drosophila, the transcriptional activation of the hunchback gene by the BICOID protein in the anterior half of the pre-cellular embryo is itself relatively static from nuclear cycle 10 until mid-nuclear cycle 14 (approximately 70 min) in regions where there is a high concentration of BICOID (Garcia et al., 2013;Lucas et al., 2013).However, at the posterior limit of hunchback expression, where the BICOID concentration is lower, there is stochastic on/off transcription, suggesting a threshold level of BICOID is required to initiate transcriptional activation.In contrast, the regulation of even-skipped transcription from the stripe 2 enhancer is known to be very dynamic (Small, Blair & Levine, 1992).Although initiated in a broad expression domain in nuclear cycle 11 and 12, transcription becomes increasingly refined in nuclear cycle 13 and 14 to produce a single mature stripe only 2 or 3 nuclei wide.Live imaging studies recently confirmed the dynamic nature of expression directed by the stripe 2 enhancer and demonstrated that the mature stripe is also surprisingly transient (as transcription is lost within 30 min, by the end of nuclear cycle 14), with individual nuclei exhibiting discontinuous bursts of transcription (Bothma et al., 2014).These results emphasize the need to carefully consider the importance of the dynamic spatial and temporal characteristics of gene expression in the networks that regulate embryonic patterning.
Figure 1
Figure 1 Schematic of the biological procceses represented in the ODE model.In (A) the Reaction terms of the model are illustrated.These include the synthesis of new mRNA through transcription, the synthesis of new protein through translation of mRNA, mRNA decay, and protein decay.In B, the Diffusion terms of the model are illustrated.These include both mRNA diffusion and protein diffusion to/from neighboring nuclei in an early Drosophila embryo.
Figure 2
Figure 2 Qualitative similarities between parameter sensitivities and experimental measurements.(A) Ubiquitous gene with initial concentrations of 1.0; First and second-order sensitivities at the middle nucleus at t = 4 min.(B) Anterior maternally deposited gene; First and second-order sensitivities at the middle nucleus at t = 2 min.In both, along the x-axis are the parameters corresponding to: 1. Transcription, 2. mRNA Diffusion, 3. mRNA Decay, 4. Translation, 5. Protein Diffusion, and 6.Protein Decay.
Figure 3
Figure 3 Comparison of parameter sensitivities to experimental measurements at a later time point.(A) Ubiquitous gene with initial concentrations of 1.0; First and second-order sensitivities at the middle nucleus at t = 10 min.(B) Anterior maternally deposited gene; First and second-order sensitivities at the middle nucleus at t = 10 min.In both, along the x-axis are the parameters corresponding to: 1. Transcription, 2. mRNA Diffusion, 3. mRNA Decay, 4. Translation, 5. Protein Diffusion, and 6.Protein Decay.
Figure 4
Figure 4 Temporal dynamics of parameter sensitivities.(A) First-order parameter sensitivities at the middle nucleus over time for a ubiquitous gene with initial concentrations of 1.0.(B) First-order parameter sensitivities at the middle nucleus over time for an anterior maternally deposited gene.
Table 1
This table describes all model parameters and the ranges used during the sensitivity analysis.
Table 2
This table contains the first-order sensitivities at the middle nucleus at t = 4 min for a ubiquitous gene with initial concentrations of 1.0.
n corresponds to the number of nuclei being modeled, the model equations are thus as follows: | 6,617.8 | 0001-01-01T00:00:00.000 | [
"Biology"
] |
Comparative Efficacy of Pharmacological Therapies for Low Back Pain: A Bayesian Network Analysis
Low back pain (LBP) is a common problem, but the efficacy of pharmacological therapies remains controversial. Therefore, we aimed to comprehensively evaluate and quantitatively rank various pharmacological therapies for patients with low back pain. Two meta-analyses were performed: an initial pair-wise meta-analysis, followed by network meta-analysis using a random-effects Bayesian model. We included randomized controlled trials comparing placebos, non-steroidal anti-inflammatory drugs, opioids, skeletal muscular relaxants, pregabalin (or gabapentin), and some drug combinations. The primary and secondary outcomes were pain intensity and physical function. Eighty-eight eligible trials with 21,377 patients were included. Here, we show that only skeletal muscle relaxants significantly decreased the pain intensity of acute (including subacute) low back pain. Several kinds of drugs significantly decreased the pain of chronic low back pain, but only opioids and cyclo-oxygenase 2-selective non-steroidal anti-inflammatory drugs effectively reduced pain and improved function. Pregabalin (or gabapentin) seemed to be an effective treatment to relieve pain, but it should be used with caution for low back pain.
INTRODUCTION
Low back pain (LBP), with an estimated mean point prevalence of 18.3%, is one of the greatest challenges to worldwide health. (Hoy et al., 2012). Most people experience LBP, and it is the dominant cause of years lived with disability. (Global Burden of Disease Study, 2015). Systematic pharmacological therapy is one of the most important and basic choices to control LBP in many major international clinical guidelines (such as the American College of Physicians (ACP), (Qaseem et al., 2017), National Institute of Health and Care Excellence (NICE), (de Campos, 2017), and Evidence-Informed Primary Care Management of Low Back Pain (Canada) (Group. TOPTLBPW, 2015) guidelines). Various drugs are commonly used in LBP pharmacotherapy, including opioids, non-steroidal anti-inflammatory drugs (NSAIDs), skeletal muscle relaxants, antidepressants, corticosteroids, antiepileptics (pregabalin or gabapentin), and combination medications (with more than two active ingredients). .
Previous pair-wise meta-analyses have assessed a few of the commonly prescribed medications, and the results showed that skeletal muscle relaxants (Chou et al., 2017a) and opioids (Chaparro et al., 2013) were effective against acute (include subacute) and chronic LBP, respectively, while NSAIDs were effective for both acute (include subacute) and chronic LBP. (Roelofs et al., 2008;Enthoven et al., 2016). Nevertheless, these studies were insufficient by only considering the direct comparative evidence between two medications. Moreover, most of the studies only compared the active interventions to placebo. Therefore, multiple comparisons between various medications are still lacking. Additionally, limited by statistical methodological shortcomings, these pair-wise meta-analyses could not quantitatively rank the efficacies of numerous medications and objectively recommend the most suitable treatments for patients with LBP.
Recently, the use of network meta-analysis has gradually increased in evidence-based medicine studies and has been proven to have outstanding advantages for assessing intricate treatment efficacy in osteoarthritis, (da Costa et al., 2017), myocardial infarction, (Jinatongthai et al., 2017), diabetes, (Palmer et al., 2016), and other areas. Network meta-analysis (NMA) can synthesize all of the direct and indirect comparison evidence into one statistical framework and then comprehensively evaluate and rank the exact quantitative efficacy of the numerous treatments. (Lu and Ades, 2004). Therefore, in this study, we aimed to perform a Bayesian random-effects network meta-analysis to comprehensively evaluate and quantitatively grade the effects of various pharmacological therapies for patients with LBP.
In our analysis, the main assumptions are that the reduce of LBP and the promote of function come from medical treatements. And the curative effect of all drugs is independent of the age and gender.
Search Results
A total of 11,239 studies were identified, and 88 eligible trials with 21,377 participants were finally included in the network metaanalysis. As presented in Supplementary Table S1, 81 eligible trials (22, 43, and 16 studies of acute, chronic, and radicular LBP, respectively) reported data on pain intensity and 47 eligible trials (12, 27, and eight studies of acute, chronic, and radicular LBP, respectively) reported data on function. The numbers of participants with acute, chronic, and radicular LBP were 7,229, 11,912, and 2,236, respectively. Details of the electronic search and selection flow diagram are shown in Figure 1. Figure 2 shows the network plot of eligible comparisons of potency. We assessed the effects of the following single pharmacological treatments for pain relief and functional evaluations: NSAIDs, opioids, corticosteroids, skeletal muscular relaxants, cyclo-oxygenase 2 (COX2)-selective NSAIDs, γaminobutyric acid (GABA) mimetic antiepileptics (gabapentin or pregabalin), other antiepileptics, tramadol, tricyclic antidepressants (TCA), selective serotonin reuptake inhibitors (SSRIs), duloxetine, buprenorphine, tanezumab, acetaminophen, anticholinergics (diphenhydramine or benztropine), diazepam and tapentadol. Further, the effects of some co-treatments were assessed.
Pair-Wise Meta-analysis
The results of the pairwise meta-analysis revealed that skeletal muscle relaxants and NSAIDs were effective for acute LBP, both for pain and function. For radicular LBP, only duloxetine was better than the placebo. For chronic LBP, COX2-selective NSAIDs, opioids, duloxetine, tanezumab and a drug combination (tramadol plus acetaminophen) showed superior efficacy to the placebo. The detailed results of the pair-wise metaanalysis are shown in Supplementary Tables S2-7.
Network Results of Acute LBP
As shown in Figure 3, the results of the network meta-analysis demonstrated that only the skeletal muscle relaxant was efficient and the most precise (SMD = 0.58 [95% CI, 0.20 to 0.97]). Although NSAIDs plus skeletal muscle relaxants had the highest ranking (SUCRA = 0.77), the pooled result of this intervention was imprecise (SMD = 0.68 [95% CI, −0.01 to 1.34]), and the comparison between NSAIDs plus skeletal muscle relaxants and a single skeletal muscle relaxant did not show superior potency (SMD = 0.10 [95% CI, −0.69 to 0.68]). However, the pooled effects indicated that none of the medications included in this study were effective for improving the condition of disability. The detailed network results for acute LBP are shown in Supplementary Tables S20, 21.
Network Results of Radicular LBP Figure 4 shows that only the combination of NSAIDs and GABA mimetic antiepileptics showed better effects than the placebo in pain management of patients with radicular LBP (SMD = 0.90 [95% CI, 0.32 to 1.50]). Moreover, this combination had a better precision than the other drugs that had a similar SUCRA. Antiepileptics (like topiramate) had the highest ranking, but the pooled result of this intervention was imprecise (SMD = 1.15 [95% CI, −0.18 to 2.48]). For the management of function, none of the medications included in this group proved to be superior to the placebo. The detailed network results for radicular LBP are shown in Supplementary Tables S24, 25. Figure 5 shows the effects of different medications on pain and function outcomes compared with the placebo. Most medications evaluated for pain management of LBP were effective, but acetaminophen, GABA mimetic antiepileptics (pregabalin or gabapentin), anticholinergics, SSRIs and buprenorphine were ineffective. Notably, the recommendation ranking of acetaminophen was significantly lower than that of the placebo (SUCRA = 0.03). In the network analysis, the combination of COX2selective NSAIDs and GABA mimetic antiepileptics seemed to have the best statistical efficiency in pain management of patients with LBP (SUCRA = 0.98); but its precision was
Inconsistency, Risk of Bias, Sensitivity Analyses, and Publication Bias
As illustrated in Figure 2, the global inconsistency assessment did not show any statistically significant difference, and local inconsistency evaluations further confirmed its consistency (p > 0.05, Supplementary Tables S8-13). The loop of acute function was formed only by one multi-arm trial, which also did not affect its consistency according to the standard definition (Higgins et al., 2012). The node-splitting inconsistency, which reported the estimated direct and indirect treatment effects and their differences, is illustrated in Supplementary Tables S14-19
The Quality of the Estimates of Treatment Effect
We rated the quality of direct, indirect, and NMA (combined direct + indirect) evidence following the GRADE approach (Puhan et al., 2014;Brignardello-Petersen et al., 2018) in our analyses ( Figure 6). There was no evidence of high quality when we analysed the functional improvement of patients with acute and radicular LBP, and most evidence was of moderate quality. However, for the pain relief of patients suffering from chronic LBP, the drug combinations (COX2-selective NSAIDs + pregabalin or gabapentin) were significantly superior to NSAIDs, and NSAIDs showed better effects than placebo. These two comparisons both showed high quality. Another combination (NSAIDs + GABA mimetic antiepileptic) also showed superiority to the placebo with high quality.
DISCUSSION
This network meta-analysis is the first comprehensive study to synthesize all of the available evidence and to precisely and quantitatively compare the efficacy of numerous pharmacological treatments for patients with LBP. Skeletal muscle relaxants are more effective than the placebo in relieving pain caused by acute LBP, consistent with previous studies. (Chou et al., 2017a). In addition to skeletal muscle relaxants, the previous meta-analyses (Roelofs et al., 2008;Chou et al., 2017a) and some guidelines (ACP and NICE) (de Campos, 2017;Qaseem et al., 2017) have recommended NSAIDs as an effective medication for the pain intensity of acute LBP. However, our results revealed that NSAIDs (including COX2-selective) were not superior to the placebo.
The reasons for this paradoxical finding may be as follows. First, we compared NSAIDs with other interventions, not just the placebo. In the network comparison, the included data revealed that NSAIDs were not more effective (in the emergency department) than acetaminophen, which seemed ineffective in our results and those of other previous high-quality trials. (Eken et al., 2014;Williams et al., 2014). Second, in this network metaanalysis, we categorized patients with sciatica into the radicular LBP group (previous studies did not). (Chou et al., 2017a). Similar to our results, a series of trials with another outcome (pain relief) showed that NSAIDs were noneffective. (Goldie, 1968;Weber and Aasand, 1980;Basmajian, 1989). Considering the controversial effects of NSAIDs and the potential risk of side effects (gastrointestinal, liver, and cardio-renal toxicities), we do not recommend NSAIDs as applicable medications to treat acute LBP with urgent pain.
In the analysis of the pain intensity of radicular LBP, previous systematic reviews found that none of the single medications was Frontiers in Pharmacology | www.frontiersin.org February 2022 | Volume 13 | Article 811962 6 effective for radicular LBP. (Chou et al., 2017a). However, our results suggest that combination pharmacotherapy with NSAIDs plus GABA mimetic antiepileptics (pregabalin or gabapentin) exhibited remarkable effects that were considerably more effective than single NSAIDs (recommended to treat nonspecific-LBP by ACP, NICE, and Canadian guidelines) (Qaseem et al., 2017;de Campos, 2017;Group. TOPTLBPW, 2015) or GABA mimetic antiepileptics (gabapentin or pregabalin, recommended to treat neuropathic-pain by NICE guidelines). (de Campos, 2017). This combination pharmacotherapy could be used to reduce prostaglandin-mediated pain and neuropathic pain simultaneously.
However, we have to acknowledge that the limitations of this finding might affect its validity. One of the trials included in this comparison was performed using a crossover-design, which would result in a carry-over effect. (Woods et al., 1989). We have to consider that pregabalin has been reported to be ineffective for sciatica and is associated with significant harms. (Mathieson et al., 2017). Whether the combination of NSAIDs and pregabalin (or gabapentin) is safe and effective for radicular LBP needs more research for confirmation. The current evidence shows that in addition to the fact that exercise has little effect, other non-invasive treatments (including pharmacotherapies and physiotherapies) are noneffective. (Chou et al., 2017b;Qaseem et al., 2017).
For the pain intensity of chronic LBP, previous metaanalyses found that strong opioid agents (morphine, oxymorphone, and others) were effective. (Chaparro et al., 2013). Although we reached the same conclusion from our NMA, we do not recommend opioids as applicable medications to treat chronic LBP in consideration of their overdose and addiction risks. In addition, a previous study found that opioids were associated with a high risk of nausea, dizziness, constipation, and vomiting. (Chaparro et al., 2013;Els et al., 2017). Of note, all of the major guidelines (ACP, NICE, and Canadian guidelines) (Qaseem et al., 2017;de Campos, 2017;Group. TOPTLBPW, 2015) recommend the consideration of central analgesics only when NSAIDs are contraindicated. Moreover, the recommendation ranking opioids was just third.
Some guidelines recommend the use of tramadol and tramadol-acetaminophen in managing the pain intensity of chronic LBP. (de Campos, 2017;Qaseem et al., 2017). We also found that tramadol-acetaminophen combination drugs were significantly effective. However, tramadol is also addictive, although this risk is less than for strong opioid agents. (Bravo et al., 2017). Compared with strong opioids and tramadolacetaminophen, COX2-selective NSAIDs plus pregabalin (or gabapentin) seemed to be a better choice. However, the comparison of this combination with placebo was of low quality because of the risk of bias and its indirectness. Similar to the findings for radicular LBP, caution should be applied when using pregabalin for chronic LBP.
The ineffectiveness of SSRIs for LBP have long been known, but duloxetine is currently commonly used to control chronic or neuropathic pain. (de Campos, 2017). Recently, a growing number of studies have supported the view that duloxetine is an effective medication for reducing the pain intensity of chronic LBP. (Chou et al., 2017a). The results of this network metaanalysis show the superior analgesic effect of duloxetine, but not that of TCA or SSRIs.
Additionally, some new pharmacological therapies for pain intensity might be available in the near future. (Abbasi, 2017;Knezevic et al., 2017). According to the current eligible data, we found that nerve growth factors (such as tanezumab) were superior to the placebo. However, the safety and tolerability of these drugs are still under evaluation. Considering its safety, NSAIDs (including COX2-seletive NSAIDs) may be applicable and safe medications for chronic LBP, although the SUCRA ranking of these medicines was not high.
Unfortunately, because data on functional improvement were lacking in many of the eligible trials, we were unable to assess the effects of all of the prescribed medications on functional improvement, as detailed as the pain intensity. For function in acute or radicular LBP, none of the pharmacotherapies showed better effects than the placebo. However, a previous study revealed that functional improvement was associated with reduced pain intensity (because the perceived intensity of pain would increase distress and fear). (Lee et al., 2015). We suggest that more high-quality randomized controlled trials should be designed to evaluate this synergistic effect. For the functional improvement of patients with chronic LBP, a previous meta-analysis found that NSAIDs had none to slight effects based on the RMDQ score. (Enthoven et al., 2016). Notably, we found that only COX2-selective NSAIDs and strong opioids were more effective than the placebo for functional improvement. The opioids had a higher SUCRA ranking and higher quality of evidence than COX2-selective NSAIDs. We still hope that COX2-selective NSAIDs could be preferentially used for functional improvement of patients with chronic LBP, following the ACP, NICE, and Canadian guidelines. (Qaseem et al., 2017;de Campos, 2017;Group. TOPTLBPW, 2015).
Our study has some limitations. First, although we used a comprehensive search strategy and strict criteria, a few old trials that were only reported as abstracts were not included. Second, we classified LBP into three groups: acute, chronic, and radicular, but we did not perform a more detailed analysis of each group according to treatment duration and dosage. Further, we believe that the absence of the above analyses limits the clinical applicability of our studies to some extent. Finally, our findings were based on unadjusted estimations, and the various characteristics of the participants (age, sex, ethnicity, and geographical location) might have influenced the synthesized effect size. Furthermore, we found a few direct comparisons which were probably publication biased. We must admit that our analysis is fully based on unadjusted estimations, and this is an assignable limitation. However, we could not find appropriate methods for covariates adjustment in our analysis. Matching-adjusted indirect comparison (MAIC) and simulated treatment comparison (STC) are not generalizable to larger treatment networks (David, 2020). Meta-regresstion has been derived for only the Frontiers in Pharmacology | www.frontiersin.org February 2022 | Volume 13 | Article 811962 simple case of binary outcomes and binary covariates so far (David, 2020).
In conclusion, this network meta-analysis is the first to provide comprehensive and quantitative evidence of pharmacological therapies for LBP. For acute LBP, skeletal muscle relaxants decreased the pain intensity with moderate quality of the estimate of effect. For radicular and chronic LBP, a combination of NSAIDs (including COX2-selective NSAIDs) and pregabalin (or gabapentin) seemed to be the best non-invasive treatment to relieve pain. In fact, many previous trials reported that pregabalin or gabapentin were not effective or safe for treating LBP as a single drug (Mathieson et al., 2017;Cairns et al., 2019). Pregabalin or gabapentin were also reported to be addictive and at the risk of misuse (Atkinson et al., 2016;Smith et al., 2016), although these drugs are effective for neuropathic pain (Finnerup et al., 2015). Research about combinations of NSAIDs (including COX2-selective NSAIDs) and pregabalin (or gabapentin) are still rare. We are looking forward to more high-quality trials and studies to be performed, either with this combination or other new pharmacological therapies for LBP.
Search Strategy and Selection Criteria
We searched PubMed, embase, Web of Science, Cochrane Central Register of Controlled Trials (CENTRAL) and Clinical Trial databases for relevant randomized clinical trials (RCTs) published before 30 December, 2019. The details of the search strategy are shown in Figure 1 and Supplementary Material S1.
Studies were selected according to the following criteria: 1) participants were >18 years old and diagnosed with LBP (radicular or non-radicular); 2) the causes of the LBP were not cancer, infection, high-velocity trauma, fractures, pregnancy, or severe neurologic deficits; 3) medications were compared with placebo or another medication; and 4) the reported drug administration route was oral or intravenous. Moreover, we did not apply any language restrictions. The exclusion criteria were trials only published as abstracts or LBP complicated with neck pain.
Data Extraction
Two investigators independently reviewed all eligible studies and then extracted the relevant data using a predefined data extraction sheet. We extracted authors, trial design and size, detailed characteristics of participants (age, sex, geographical location, duration of pain symptoms, and duration of follow up), intervention, and outcome data. The time point we extracted data was the last one in trials that had multiple time points. The number of patients in each trial was extracted as the number of subjects who completed the trial. If the data were not reported, we extracted the number of initial subjects. We only extracted data reported in the articles, and if the data were graphically presented, we collected them from the related graphical information. (Enthoven et al., 2016). Any disagreements were resolved by team discussions, and the final decision was reached based on a majority vote.
Outcomes and Study Design
The primary outcome of this study was the mean pain intensity. If more than one pain condition was provided in a single trial, we extracted the data in the following order of hierarchy: average pain intensity > pain on rest > pain on walking > pain on sleep. In addition, our secondary outcome was functional improvement. Quantitative evaluations of function using the Roland Morris disability questionnaire (RMDQ) or the Oswestry disability index (ODI) were included in the analysis.
Quality Assessment of the Evidence
The Grading of Recommendations Assessment, Development and Evaluation (GRADE) Working Group has developed a sensible and transparent approach to grading the quality of evidence. (Guyatt et al., 2011a;Balshem et al., 2011;Guyatt et al., 2011b;Guyatt et al., 2011c;Guyatt et al., 2011d;Guyatt et al., 2011e;Guyatt et al., 2011f). In the GRADE approach, the evidence is evaluated based on five domains: study limitations (risk of bias), inconsistency and heterogeneity, indirectness, imprecision, and publication bias. The risk of bias was assessed by using the Cochrane risk of bias tool (Higgins, 2011) for each study. Contributions of the included studies to direct and indirect evidence were used to assess the risk of bias of the NMA. Indirectness was identified as surrogate outcomes, study populations or interventions that differed from those of interest (Guyatt et al., 2011e) or intransitivity (Jansen et al., 2011). Imprecision was confirmed if the 95% confidence intervals were wide. When we rated the quality of the evidence, we followed four steps to assess the quality of treatment effect estimates from the NMA: 1. present direct and indirect treatment estimates for each comparison of the evidence network; 2. rate the quality of each direct and indirect effect estimate; 3. present the NMA estimate for each comparison of the evidence network; 4. rate the quality of each NMA effect estimate. For a particular comparison, if both direct and indirect evidence were available, we chose the higher of the two quality ratings as the quality rating for the NMA estimate. (Puhan et al., 2014).
Statistical Methods
We performed two types of meta-analyses in this study. First, we performed a standard pair-wise meta-analysis using a random-effects model. The heterogeneity and inconsistency in these analyses were assessed with I 2 , τ 2 and p-value. (Higgins, 2011). Further, we conducted a network metaanalysis using a random Bayesian model. Three Markov chains were used in this Bayesian model, and we recorded each trace plot to ensure that the pooled results were Frontiers in Pharmacology | www.frontiersin.org February 2022 | Volume 13 | Article 811962 8 convergent. Details of the random Bayesian model are shown in Supplementary Material S2. All of the statistical analyses were performed using WINBUGS, R, and STATA.
The changes in pain intensity and function were considered as continuous outcomes, and the pooled effect size was calculated as standardized mean difference (SMD). Each mean pooled effect size was reported as the corresponding 95% confidence intervals (CI). (Salanti et al., 2008). Further, the possible rank of each intervention was evaluated using surface under the cumulative ranking (SUCRA) probabilities, and higher values indicated a more efficient intervention. (Salanti et al., 2011).
To ensure the transitivity assumption held, we assessed potential inconsistencies between direct and indirect evidence using their specific methods. The global inconsistency was assessed using a design-by-treatment interaction model, which used the χ (Global Burden of Disease Study, 2015) test to confirm the plausibility of assumptive consistency throughout the network analysis. (Higgins et al., 2012). Further, we assessed the local inconsistency by calculating the difference between direct and indirect estimates of the closed loops in every network using the loop-specific approach. We also constructed a node-splitting model, which separates the direct and indirect evidence to evaluate the inconsistency. (Dias et al., 2010). Additionally, to assess the possible publication bias throughout the network, comparison-adjusted funnel plots were constructed. (Salanti et al., 2011). Moreover, we performed sensitivity analyses for pain intensity by omitting the low-quality trials to verify that our pooled results were stable.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material further inquiries can be directed to the corresponding authors.
AUTHOR CONTRIBUTIONS
JJ, HP, HC, LS, SF, and XL contributed to the conception and design of the study. JJ, HP, YW, and PC performed the analysis and interpretation of data. XL and SF contributed to draft manuscript. All approved the submitted manuscript. | 5,314.4 | 2022-02-15T00:00:00.000 | [
"Medicine",
"Biology"
] |
The Hyaluronic Acid–HDAC3–miRNA Network in Allergic Inflammation
We previously reported the anti-allergic effect of high molecular weight form of hyaluronic acid (HMW-HA). In doing so, HA targets CD44 and inhibits FcεRI signaling and cross-talk between epidermal growth factor receptor (EGFR) and FcεRI. We previously reported the role of histone deacetylases (HDACs) in allergic inflammation and allergic inflammation-promoted enhanced tumorigenic potential. We reported regulatory role of HA in the expression of HDAC3. In this review, we will discuss molecular mechanisms associated with anti-allergic effect of HA in relation with HDACs. The role of microRNAs (miRNAs) in allergic inflammation has been reported. We will also discuss the role of miRNAs in allergic inflammation in relation with HA-mediated anti-allergic effects.
The Role of Hyaluronic Acid in Allergic Inflammation
Hyaluronic acid (HA), a major component of the extracellular matrix (ECM), plays a key role in regulating inflammation. HA enhances proteoglycan synthesis, reduces the production and activity of pro-inflammatory mediators and matrix metalloproteinases, and alters the behavior of immune cells (1). Inflammation is associated with accumulation and turnover of HA polymers by multiple cell types. Increased accumulation of HA has been demonstrated in joint tissue of rheumatoid arthritis (RA) patients (2); in lung disease, both in humans (3) and animal experimental models (4); in inflammatory liver disease; during vascular disease (5); in rejected kidney transplants (6) as well renal tissue of patients experiencing diabetic nephropathy (7); in the intestine of patients undergoing flares of inflammatory bowel disease (IBD) (8).
Circulating HA might be a marker of asthma control, as it correlates with airway resistance and has good sensitivity in the detection of impaired asthma control (9). The increased level of HA is correlated with asthma (10). In addition, HA appears to provide the scaffolding for inflammatory cell accumulation as well as for new collagen synthesis and deposition (10). HA deposition appears largely due to up-regulation of hyaluronan synthase 1 (HAS1) and hyaluronan synthase 2 (HAS2). HAS2 mRNA is markedly increased in asthmatic fibroblasts (11). In cases of inflammation, HA contains a variety of HA polymers with overlapping lengths and functions. HA exists as both a pro-and anti-inflammatory molecule in vivo, and these contradictory functions depend upon polymer length. High molecular weight form of hyaluronic acid (HMW-HA) elicits protective antiinflammatory effects that protect lung epithelial cells from apoptosis and is protective against liver injury, acting to reduce pro-inflammatory cytokines in a T-cell-mediated injury model (12). HMW-HA inhibits macrophage proliferation and cytokine release, leading to decreased inflammation in the early wound of a preclinical post laminectomy rat model (13). HMW-HA exerts a negative effect on the activation of mitogen-activated protein kinase (MAPK) by allergic inflammation (14). HA with an average molecular mass <500 kDa can be considered a fragment. HA fragments with an average molecular weight of 200 kDa have been shown to stimulate chemokines, cytokines, growth factors, proteases, and by macrophages (15)(16)(17)(18)(19)(20). Organic contact sensitizers induce production of reactive oxygen species (ROS) and a concomitant breakdown of HA to pro-inflammatory low molecular weight fragments in the skin (21). Importantly, inhibition of either ROS-mediated or enzymatic HA breakdown prevents sensitization as well as elicitation of Chediak-Higashi Syndrome (CHS) (21). Mucus hyper secretion with elevated MUC5B mucin production is a pathologic feature in many airway diseases associated with oxidative stress (22). ROS-induced MUC5AC expression in normal human bronchial epithelial cells (NHBE) is dependent on HA depolymerization and epidermal growth factor receptor (EGFR)/MAPK activation (22). Although most of the work on low molecular weight HA (LMW-HA) fragments initially illustrated a pro-inflammatory response, a number of studies have shown that HA fragments can also be protective. In a murine model of colitis, intraperitoneal injection of HA <750 kDa protects colonic epithelium in a Toll-like receptor (TLR) 4-dependent manner (23). This functional difference between HAs of varying sizes is a matter of controversy since many studies have reported opposing results in regard to which type of HA can bring about cellular changes (24). These contradictory functions of HA, depending on the polymer length, may result from differential effects of these HA on HA receptors such as CD44 and receptor for HA-mediated motility (RHAMM). Exogenous HAs used in many studies are not homogenous with respect to size. Therefore, it is difficult to conclude that size alone determines the function of HAs of various sizes. These discrepancies may also be due to differences in experimental settings, purity of HA (25), and the possibility of diverse responses to HA depending on the cell type. Although many reports suggest anti-allergic effect of exogenous HA, the effect of endogenous HMW-HA on the allergic inflammation needs further investigation.
Hyaluronic acid levels are elevated in allergic animals and the increase correlates with the influx of inflammatory cells. This increase in HA levels is largely due to up-regulation of hyaluronidase-1 (HYAL-1) and hyaluronidase-2 (HYAL-2) (26). HYAL-1, -2, and -3 are expressed in airway epithelium and may operate in a coordinated fashion to depolymerize HA during allergen-induced asthmatic responses associated with up-regulation of tumor necrosis factor-alpha (TNF-alpha) and interleukin-1 beta (IL-1beta) (27). Degradation of HA by HYAL-1 primarily depends upon CD44 or other HA receptors to internalize HA fragments. Patients deficient in HYAL-1 have been reported with plasma HA levels at 40 times normal (28). The finding of HYAL-2 in complex with CD44 at the plasma membrane suggests that HA-binding proteins may enhance the activity of HA degrading enzymes, and CD44 binding may provide HYAL-2 with a preferable conformation of HA. IL-1beta exerts inflammatory activity via CD44 by the mediation of HA fragments derived from HA depolymerization (29).
CD44, a receptor for HA, expressed on CD4(+) T cells plays a critical role in the accumulation of antigen-specific Th2 cells, but not Th1 cells, in the airway and in the development of airway hyper-responsiveness (AHR) induced by antigen challenge (30). Airway fibroblasts from patients with asthma produced significantly increased concentrations of LMW-HA compared with those of normal fibroblasts (30). CD44, but not CD62L, is required for leukocyte extravasations during a Th2-type inflammatory response such as allergic dermatitis (31). HMW-HA inhibits interaction between IgE and FcεRI and between FcεRI and protein kinase C δ (PKCδ) during allergic inflammation (14). A role for CD44 in the regulation of allergic inflammation in vivo has been shown by studies in which anti-CD44 treatment inhibited the development of optimal contact allergic responses (32). CD44 has been shown to be responsible for the development of pulmonary eosinophilia (33). CD44-hyaluronan interaction is necessary for allergic asthma (34). The serum-derived hyaluronanassociated protein (SHAP)-HA complex has an inhibitory role in the development of airway hyper responsiveness and allergic airway inflammation which may be attributed, at least in part, to negative feedback mechanisms exerted by SHAP (35). It will be necessary to examine effects of HAs of various sizes on the expression and/or activity of CD44.
The Role of HDAC3 in Allergic Inflammation
Histone acetylation/deacetylation plays an important role in the regulation of inflammatory genes associated with allergic inflammation (36). Histone deacetylase-3 (HDAC3)-deficient macrophages are unable to activate almost half of the inflammatory gene expression program when stimulated with lipopolysaccharide (LPS) (37). Pulmonary inflammation is ameliorated in mice lacking HDAC3 in macrophages (38). The induction of cyclooxygenase (COX)-2, which occurs during allergic inflammation, is accompanied by degradation of HDAC1 (39). HDAC2 expression and activity are decreased in asthmatic subjects, smokers, and smoking asthmatic subjects (40). HDAC3, induced by antigen stimulation, interacts with FcεRI and is necessary for allergic inflammation both in vitro and in vivo (41). DNA methyl transferase I (DNMT1) acts as a negative regulator of allergic inflammation and the down-regulation of DNMT1 induces the expression of HDAC3 (42). HDAC3 is necessary for the induction of TNF-α, a cytokine increased during allergic inflammation, in cardiomyocytes during LPS stimulation (43). HDAC3 mediates allergic inflammation by regulating the expression of monocyte chemoattractant protein-1 (MCP1) (41). HMW-HA, but not LMW-HASs, decreases the expression of HDAC3 in human vascular endothelial cells to promote angiogenesis which is accompanied by allergic inflammation (44). reports suggest role of HDACs in the expression regulation of miRNAs (47)(48)(49)(50). miRNA let-7a regulates the expression of IL-13, a cytokine necessary for allergic lung disease (51). The downregulation of miR-145 inhibits Th2 cytokine production and AHR (52). HA-CD44 interaction enhances the expression of miR-10b (53). miR-199a-3p and miR-34a miR-590-3p target CD44 (54,55). Polymorphisms of CD44 3 ′ UTR weaken the binding of miRNAs (55), suggesting that miRNAs regulate the expression of CD44. Given the fact that CD44 is involved in allergic inflammation, miRNAs may regulate HA-mediated anti-allergic inflammation.
The Regulation of HA Metabolism by miRNAs and HDAC3
In silico screening of expression data with predicted miR-23 target sites combined with in vivo testing, predicts HAS2 as novel direct target of miR-23 (56). miR-23a-3p in non-senescent fibroblasts leads to the decreased HAS2-mediated HA synthesis (57). This implies that miR-23 may regulate the production of HA during allergic inflammation. Based on our previous report (44), HA-CD44 may decrease the expression of HDAC3 ( Figure 1B). Promoter analysis shows that HAS1 and HAS2 contain the binding sites for YY1, STAT6, NF-kB, and HDAC2 ( Figure 1C), suggesting that the production of HA is under epigenetic regulation. Because HDAC3 shows an inverse relationship with HDAC2 (41), HDAC3 may regulate the expression of HASs to mediate allergic inflammation. Many reports suggest that HASs may increase the production of HMW-HA to exert anti-allergic effects ( Figure 1B). Thus, the decreased expression of HDAC3 by HA-CD44 interaction may increase the expression of HAS1 and HAS2 to exert anti-allergic effect ( Figure 1B). HDAC3, increased during allergic inflammation, may regulate the expression of HYALs and HASs differentially to increase the production of LMW-HA. This may result in allergic inflammation ( Figure 1B).
Promoter analysis shows that HYAL-1, -2, and -3 contain binding sites for various transcriptional regulators including HDAC2 (Figure 1C), suggesting the role of HDAC3 in the expression regulation of HYALs. TargetScan analysis predicts the binding of miRNAs, such as miR-24,-28, -134, and -370, to the 3 ′ -UTR sequences of HYAL-1 ( Figure 1C). TargetScan analysis predicts the binding of various miRNAs to the 3 ′ -UTR sequences of HYAL-2 and HYAL-3 ( Figure 1C). These miRNAs may prevent the production of HA fragments by negatively regulating the expression of these HYALs. Thus, these miRNAs may mediate allergic inflammation. TargetScan analysis predicts the binding of miR-212,-132, and -590 to the 3 ′ -UTR of HDAC3 ( Figure 1C). These miRNAs may exert anti-allergic effects by decreasing the expression of HDAC3. Taken together, miRNAs and HDAC3 may regulate allergic inflammation through their effects on HA metabolism.
Concluding Remarks and Perspectives
In this study, we show the possible involvement of miRNAs and HDAC3 in the regulation of HA metabolism. HA-HDAC3 -miRNA network described in this review may offer valuable mechanism for HA-mediated anti-allergic effects. For better understanding of HA-mediated anti-allergic effect, it will be necessary to identify downstream targets of HA. The downstream targets of HA would be valuable for the development of anti-allergic drugs. Identification of more miRNAs that regulate allergic inflammation in relation to HA and HDAC3 will be necessary for better understanding of HA-mediated anti-allergic inflammation. | 2,574.6 | 2015-04-30T00:00:00.000 | [
"Medicine",
"Biology"
] |
A Quasistatic Electro-Viscoelastic Contact Problem with Adhesion
The aim of this paper is to study the process of contact with adhesion between a piezoelectric body and an obstacle, the so-called foundation. The material’s behavior is assumed to be electro-viscoelastic; the process is quasistatic, the contact is modeled by the Signorini condition. The adhesion process is modeled by a bonding field on the contact surface. We derive a variational formulation for the problem and then we prove the existence of a unique weak solution to the model. The proof is based on a general result on evolution equations with maximal monotone operators and fixed-point arguments.
Introduction
A piezoelectric body is one that produces an electric charge when a mechanical stress is applied (the body is squeezed or stretched). Conversely, a mechanical deformation (the body shrinks or expands) is produced when an electric field is applied. This kind of materials appears usually in the industry as switches in radiotronics, electroacoustics, or measuring equipments. Piezoelectric materials for which the mechanical properties are elastic are also called electro-elastic materials, and those for which the mechanical properties are viscoelastic are also called electro-viscoelastic materials. Different models have been developed to describe the interaction between the electrical and mechanical fields( see, e.g., [2,14,[16][17][18][19][29][30][31] and the references therein). General models for elastic materials with piezoelectric effect, called electro-elastic materials, can be found in [2,4,14]. A static frictional contact problem for electricelastic materials was considered in [1,15], under the assumption that the foundation is insulated. Contact problems involving elasto-piezoelectric materials [1,15,28], viscoelastic piezoelectric materials [5,25] have been studied.
Adhesion may take place between parts of the contacting surfaces. It may be intentional, when surfaces are bonded with glue, or unintentional, as a seizure between very clean surfaces. The adhesive contact is modeled by a bonding field on the contact surface, denoted in this paper by β; it describes the pointwise fractional density of active bonds on the contact surface, and sometimes referred to as the intensity of adhesion. Following [10,11], the bonding field satisfies the restrictions 0 ≤ β ≤ 1; when β = 1 at a point of the contact surface, the adhesion is complete and all the bonds are active; when β = 0 all the bonds are inactive, severed, and there is no adhesion; when 0 < β < 1 the adhesion is partial and only a fraction β of the bonds is active. Basic modeling can be found in [10][11][12]. Analysis of models for adhesive contact can be found in [7,8] and in the monographs [24,27]. An application of the theory of adhesive contact in the medical field of prosthetic limbs was considered in [22,23]; there, the importance of the bonding between the bone-implant and the tissue was outlined, since debonding may lead to decrease in the persons ability to use the artificial limb or joint.
In this work we continue in this line of research, where we extend the result established in [3,20] for contact problem described with the Signorini conditions into contact problem described with the Signorini conditions with adhesion where the obstacle is a perfect insulator and the resistance to tangential motion is generated by the glue, in comparison to which the frictional traction can be neglected. Therefore, the tangential contact traction depends only on the bonding field and the tangential displacement.
The paper is structured as follows. In Sect. 2 we present the electro-viscoelastic contact model with adhesion and provide comments on the contact boundary conditions. In Sect. 3 we list the assumptions on the data and derive the variational formulation. In Sect. 4, we present our main existence and uniqueness result, Theorem 4.1, which states the unique weak solvability of the Signorini adhesive contact problem. The proof of the theorem is provided in Sect. 5, where it is carried out in several steps and is based on a general result on evolution equations with maximal monotone operators and fixed-point theorem.
The Model
We consider a body made of a piezoelectric material which occupies the domain ⊂ R d (d = 2, 3) with a smooth boundary ∂ = and a unit outward normal ν. The body is acted upon by body forces of density f 0 and has volume free electric charges of density q 0 . It is also constrained mechanically and electrically on the boundary. To describe these constraints we assume a partition of into three open disjoint parts 1 , 2 , and 3 , on the one hand, and a partition of 1 ∪ 2 into two open parts a and b , on the other hand. We assume that meas 1 > 0 and meas a > 0; these conditions allow the use of coercivity arguments in the proof of the unique solvability of the model. The body is clamped on 1 , and therefore, the displacement field vanishes there. Surface tractions of density f 2 act on 2 . We also assume that the electrical potential vanishes on a and a surface electrical charge of density q 2 is prescribed on b . On 3 the body is in adhesive contact with an insulator obstacle, the so-called foundation. The contact is frictionless and, since the foundation is assumed to be rigid, we model it with the Signorini condition. We are interested in the deformation of the body on the time interval [0 T ]. The process is assumed to be quasistatic, i.e., the inertial effects in the equation of motion are neglected. We denote by x ∈ ∪ and t ∈ [0 T ] the spatial and the time variable, respectively, and, to simplify the notation, we do not indicate in what follows the dependence of various functions on x and t. Here and everywhere in this paper, i, j, k, l = 1, . . . , d, summation over two repeated indices is implied, and the index that follows a comma represents the partial derivative with respect to the corresponding component of x. The dot above variable represents the time derivatives.
We denote by S d the space of second-order symmetric tensors on R d (d = 2, 3) and by ". , . the inner product and the norm on S d and R d , respectively, that is We also use the usual notation for the normal components and the tangential parts of vectors and tensors, respectively, given by υ ν = υ · ν, υ τ = υ − υ ν ν, σ ν = σ i j ν i ν j , and σ τ = σ ν − σ ν ν.
With these assumptions, the classical model for the process is the following.
2)
Div (2.14) We now provide some comments on equations and conditions (2.1)-(2.14). First, equations (2.1) and (2.2) represent the electro-viscoelastic constitutive law in which σ = (σ i j ) is the stress tensor, ε(u) = (ε i j (u)) denotes the linearized strain tensor, E(ϕ) = −∇ϕ is the electric field, A and F are the viscosity and elasticity operators, respectively, E = (e i jk ) represents the third-order piezoelectric tensor, E * = (e * i jk ), where e * i jk = e ki j , is its transpose, B =(B i j ) denotes the electric permittivity tensor, and D = (D 1 , . . . , D d ) is the electric displacement vector. Details on the constitutive equations of the form (2.1) and (2.2) can be found, for instance, in [1,2,13,21] and the references therein.
Next, Eqs. (2.3) and (2.4) are the equilibrium equations for the stress and electric displacement fields, respectively, in which Div" and "div " denote the divergence operators for tensor and vector valued functions, respectively.
Conditions (2.5) and (2.6) are the displacement and traction boundary conditions, whereas (2.10) and (2.11) represent the electric boundary conditions. Note that we need to impose assumption (2.12) for physical reasons. Indeed, this condition models the case when the obstacle is a perfect insulator and was used in [1,9,15,25,26]. The evolution of the bonding field is governed by the differential Eq. (2.9) with given positive parameters γ ν and ε a where r + = max{0, r }.
Condition (2.7) represents the Signorini contact condition with adhesion, where u ν is the normal displacement, σ ν represents the normal stress, γ ν denotes a given adhesion coefficient, and R ν is the truncation operator defined by Here L > 0 is the characteristic length of the bond, beyond which it does not offer any additional traction (see [27]). We assume that the resistance to tangential motion is generated only by the glue, and is assumed to depend on the adhesion field and on the tangential displacement, but, again, only up to the bond length L (see (2.8)), where the truncation operator R τ is defined by Then, p τ (β) acts as the stiffness or spring constant, increasing with (β), and the traction is in the direction opposite to the displacement. The maximal modulus of the tangential traction is p τ (1)L.
Finally, (2.13) and (2.14) represent the initial conditions in which u 0 and β 0 are the prescribed initial displacement and bonding fields, respectively.
Variational Formulation and Preliminaries
In this section, we list the assumptions on the data and derive a variational formulation for the contact problem. To this end we need to introduce some notation and preliminaries.
Everywhere below, we use the classical notation for L p and Sobolev spaces associated to and . Moreover, we use the notation L 2 ( ) d , H 1 ( ) d , H, and H 1 for the following spaces The spaces L 2 ( ) d , H 1 ( ) d , H, and H 1 are real Hilbert spaces endowed with the canonical inner products given by and the associated norms · L 2 ( ) d , · H 1 ( ) d , · H , and · H 1 , respectively. Here and below we use the notation For every element υ ∈ H 1 ( ) d we also write υ for the trace of υ on and we denote by υ ν and υ τ the normal and tangential components of υ on .
We now list the assumptions on the problem's data. The viscosity operator A and the elasticity operator F are assumed to satisfy the conditions The piezoelectric tensor E and the electric permittivity tensor B satisfy (3.4) As in [8] we assume that the tangential contact function satisfies The forces, tractions, volume, and surface-free charge densities satisfy The adhesion coefficient γ ν and the limit bound ε a satisfy the conditions Also, we assume that the initial bonding field satisfies Moreover, the tensor E and its transpose E * satisfy the equality Let now consider the closed subspace of H 1 ( ) d defined by Since meas ( 1 ) > 0 and the viscosity tensor satisfies assumption (3.1), it follows that V is a real Hilbert space endowed with the inner product (3.11) and let · V be the associated norm. We also introduce the following spaces Since meas ( a ) > 0 it is well known that W is a real Hilbert space endowed with the inner product (ϕ, ψ) W = (∇ϕ, ∇ψ) L 2 ( ) d , and the associated norm · W . Also we have the following Friedrichs-Poincaré inequality where c F > 0 is a constant which depends only on and a . The space W is a real Hilbert space endowed with the inner product and the associated norm · W . Moreover, by the Sobolev trace theorem, there exist two positive constants c 0 andc 0 such that for all υ ∈ V, ψ ∈ W , and t ∈ [0, T ]. We note that the definitions of f and q are based on the Riesz representation theorem. Moreover, it follows from assumptions (3.6) and (3.7) that For Signorini problem, we use the convex subset of admissible displacements fields given by U ad = {υ ∈ V / υ ν ≤ 0 on 3 } , and we make the regularity assumption on the initial data. Also, we introduce the set a.e. on 3 }.
Let X be a real Hilbert space with the inner product (·, ·) X and the associated norm · X , and let A : The operator A : (ii) maximal monotone if A is monotone and there is no monotone operator B : X −→ 2 X such that Gr(A) is a proper subset of Gr(B), which is equivalent to the following implication For a function φ : X −→] − ∞, +∞] we use the notation D(φ) and ∂φ for the effective domain and the subdifferential of φ, i.e., Finally, let φ K : X →] − ∞, +∞] denote the indicator function of the set K , i.e., It can be shown that the subdifferential of the indicator function ∂φ K : X −→ 2 X of a closed convex K of the space X is a maximal monotone operator. We can also show that the sum of a maximal monotone operator and a single-valued monotone Lipschitz continuous operator is a maximal monotone operator.
Finally, we use the usual notation for the Lebesgue spaces L p (0, T ; X ) and Sobolev spaces W k, p (0, T ; X ) where 1 ≤ p ≤ ∞ and k ∈ N. We will need the following result for existence and uniqueness proofs.
Theorem 4.2 Let X be a real Hilbert space and let A : D(A)
⊂ X −→ 2 X be a multivalued operator such that the operator A + ωI X is maximal monotone for some real ω. Then, for every f ∈ W 1,1 (0, T ; X ) and u 0 ∈ D(A), there exists a unique function u ∈ W 1,∞ (0, T ; X ) which satisfieṡ A proof of Theorem 4.2 may be found in ( [6], p. 32). Here and below I X is the identity map on X .
Proof of Theorem 4.1
We assume in the following that the conditions of Theorem 4.1 hold and below we denote by c a generic positive constant which is independent of time and whose value may change from place to place.
By the Riesz representation theorem we can define the following operators G : W −→ W and R : V −→ W , respectively, by We can show that G is a linearly continuous symmetric positive definite operator. Therefore, G is an invertible operator on W . We can also prove that R is a linear continuous operator on V . Let R * the adjoint of R. Thus, from (3.10) we can write where we obtain for all t ∈ [0, T ]. On the other hand, G is invertible where the previous equality gives us Now, using (5.3), (5.5), and (3.22) we obtain Let η ∈ W 1,∞ (0, T ; V ) be given. In the first step we prove the following existence and uniqueness result for the displacement field.
Lemma 5.1 There exists a unique function u
Proof Let now the operator L : V → V defined by Using the properties of the operators G, R, and R * we deduce that L is a continuous linear operator on V . Thus we have By the Riesz representation theorem we can define an operator G : Now, taking into account (3.1), (3.2), (3.11) and (5.11) it follows that is, G is a Lipschitz continuous operator. Moreover, the operator is a monotone Lipschitz continuous operator on V .
Let the function f : [0 T ] −→ V given by Keeping in mind that η ∈ W 1,∞ (0, T ; V ), using (3.16), (3.17) and the fact that R * G −1 is linearly continuous, it follows from (5.13) that Let φ U ad : V → − ∞, +∞ denote the indicator function of the set U ad and let ∂φ U ad be the subdifferential of φ U ad . Since U ad is a nonempty, convex, closed part of V , it follows that ∂φ U ad is a maximal monotone operator on V and D(∂φ U ad ) = U ad . Moreover, the sum is a maximal monotone operator. Thus, conditions (3.18) and (5.14 ) allow us to apply Theorem 4.
Since for any elements u, g ∈ V , the following equivalence holds the differential inclusion (5.15) is equivalent to the following variational inequality We use now (5.17), (5.11), (3.11) to see that u η satisfies the following inequality It follows now from (5.18), (5.13), (5.9), and (5.16) that u η satisfies (5.7) and (5.8), which concludes the proof of Lemma 5.1.
In the second step we use the displacement field u η obtained in Lemma 5.1 to obtain the following existence and uniqueness result for the electric potential field.
In the third step, we use again the displacement field u η obtained in Lemma 5.1 and we consider the following initial value problem.
We obtain the following result. | 4,018 | 2015-09-22T00:00:00.000 | [
"Engineering",
"Physics"
] |
ILT 15-A Computer Program for Evaluation of Accelerated Leach Test Data of LLW in the Hungarian NPP Paks
Computer Program ILT15 developed to accompany a new leach test for solidified radioactive waste forms in the Hungarian NPP Paks. The program is designed to be used as a tool for performing the calculations necessary to analyse leach test data, a modelling program to determine if diffusion is the operating leaching mechanism (and, if not, to indicate other possible mechanisms), and a means to make extrapolations using the diffusion models. The ILT15 program contains four mathematical models that can be used to represent the data, diffusion through a semi-infinite medium, diffusion through a finite cylinder, diffusion plus partitioning of the source term and solubility limited leaching. The program is written in C++ in the Borland C++ Builder programming environment. A detailed description of application of this modelling computer program is given.
Introduction
For a number of years increasing attention has been given in Hungary to the management of the low and medium level radioactive wastes (LLW, MLW) being produced in Paks nuclear power plant.Some of these wastes, for example, evaporator bottom concentrates, pond sludge and spent ion exchange media are produced in relatively large volumes.In addition to national programs on the development of immobilization processes, the European Community commissioned programs on the immobilization of LLW and MLW.These wastes are immobilized by incorporating them into cement.In order to optimize these immobilization processes, for example with respect to waste loading, it was necessary to characterize the products with respect to such properties as density, strength, dimensional stability, leach resistance and so on.In this article we report about an accelerated leach test and the developed computer program.
Computer Program ILT15 was developed to accompany a new leach test for solidified radioactive waste forms.
The program is designed to be used as a tool for performing the calculations necessary to analyse leach test data, a modelling program to determine if diffusion is the operating leaching mechanism (and, if not, to indicate other possible mechanisms), and a means to make extrapolations using the diffusion models.The ILT15 program contains four mathematical models that can be used to represent the data.
The mathematical models describing leaching mechanisms are as below: 1. Diffusion through a semi-infinite medium (for low fractional releases), 2. Diffusion through a finite cylinder (for high fractional releases), 3. Diffusion plus partitioning of the source term, 4. Solubility limited leaching.
The program uses simple mathematical models described in the ASTM C1308-08 [1] standard and the fundamentals can be seen below in the next topic.Unlike the ANS-16.1 and EPA 1315 methods, the ASTM 1308 method calculates a diffusion coefficient based on a cumulative release rather than an incremental release.It effectively calculates an average over the duration of the test and it was chosen for our investigation.
Leaching by diffusion
Mass transfer via diffusion is described by the Fick laws.In simplest case the diffusion is not depends on time and described by the Fick 1 st law: Φ m mass flow density (mass flux), mass of material diffused through a unit of area during unit of time kg/(m 2 *s) This process is ideal, supposing timeless inflow of diffusing material and constant concentrations in time at various distances.
In case of changing concentrations in space and time the Fick 2 nd law describes the phenomena: where: In general form:
Basics of the test and requirements for the test components
The base of the leach test is a semi dynamic method, when a cylindrical specimen is immersed in a leach solution (water or aqueous solution), then usually in time the specimen is exchanged with a new one end the leached concentration or mass is determined.This compared to the original total concentration or mass results the Incremental Fraction Leached (IFL).Summing the IFL values till a given leach time we get the Cumulative Fraction Leached (CFL) values.More frequent exchange of specimens during the test results more exact modelling with the Fick 2 nd law, but the leached amount of material will be lower and the determination uncertainty will be larger.Because of the above restrictions the leaching time intervals are optimized.For that reason the leach test should be completed under standard conditions, including the specimen and leach solution characteristics as well as the leach vessel material and auxiliary conditions (specimen fixation, mixing, filtering etc.).
Requirements for the leaching liquor
• the leach solution will not react with the material of the specimen and will not modify it • the leach solution should not contain such a component, which modifies the leaching mechanism
Requirements for the leaching vessel
• the wall of the vessel could not react with the solution and leached components • the exchange of the solution should be easy and the solution in the vessel should not evaporate
Requirements for the auxiliary components
• their materials should not react with the solvent and with the leached materials • the filtered leached materials could be analysed • the filter will remove particles with diameter > 45 μm • the hanging the specimen should not influence the leaching and should not cover more than 1 % of the surface
Requirements for the specimen
• the specimen is a cylindrical body with a diameter/ height ratio 1/1 and their value is 2.5 cm • the specimen composition should be identical with the waste composition • the distribution of the radioactive isotope(s) or heavy metal material should be uniform in specimen • the structure of the specimen material should be the identical at the surface and in the bulk • every specimen's geometry, mass and embedded radioactive or heavy metal content should be accurately determined
Other requirements
• the temperature during leaching should be constant with a maximum fluctuation less then: 1°C • surface to volume ratio for specimen should be constant during leach test(s) and the ratio should be: where: volume of leaching liquid cm 3 Regularly at determined term in the leaching liquid should be changed the amount of the leached material or activity should be determined.These intervals should be from the start of leaching 2h, 7h, 24h, 48h, till the end of the 11 th day.
By using the determined leached amounts of material (s) or activities one can calculate the Incremental/ Cumulative Fraction Leached IFL/CFL): where: Calculation starts with the diffusion in semi-infinite specimen using some early leaching data pairs (CFL < 0.2) determining the diffusion coefficient, and continues the calculations with diffusion in finite cylindrical specimen.Diffusion in semi-infinite specimen method uses the following equation: If the CFL > 0.2 calculation continues using the diffusion in finite cylindrical specimen method, where the cumulative fraction leached (CFL) is calculated using a double series expression, with the series nd S c defined in the standard.
Calculations for CFL values in the program are based using equations developed by Pescatore [2,3].
Diffusion plus Partition leaching model
In the partition model a fraction of the contaminant is considered to be immobile and not leachable.This model uses the model for diffusion from a finite cylinder (or a semi-infinite cylinder) if the CFL leached is less than 0.0124, but alters the result by reducing the original source term so that the cumulative fraction leached is determined according to the following equation: where P is the source term partitioning factor between 0 and 1.
Solubility-Limited leaching model
This model accounts for the leaching system in which diffusion is affected by the limited solubility of a radioactive isotope or heavy metal.The model is based on the concept that the leached incremental fractions will be the same at the end of each 1-day sampling interval in case of solubility-limited leaching.This is a nondiffusion controlled leaching.
Using the program in the NPP 4.1 Running the program
When the program begins to run after the installation the following main menu will appear on the screen waiting for the input data from the keyboard or from a "csv" data file.
We entered the measured leach data from experimental results (e.g., leach time and counts per minute-cpm orconcentration).Additionally we input the height and diameter of the solid cylindrical specimen, volume and surface and material of lecher, and the number of summa count of radioactivity or concentration.Alternatively we input the leaching data from an earlier saved "csv" data file.
As an example we input the Cs-137 leach test data using the leach test data measured from a c400 cement cylinder with the embedded evaporator bottom residue of the tank 02TW10B002 of the NPP (Fig. 1).
After completing data input (and/or editing) we selected the fitting model form the "Calculation" menu.(Fig. 2) First we chose the "Diffusion Leaching Model" and clicking on "Calculation" button the following windows appeared (Fig. 3).
The resulted window contains the measured and calculated (fitted) CFL values for each leach time as well as the determined diffusion coefficient in cm 2 /sec and in cm 2 / day unit and the relative error of fitting in percent.A fit is usable if the relative error is less than 0.5 %.By clicking on the "View Fit Data" button the following diagram appeared (Fig. 4) After saving the data and diagram we used the other two fitting models too to find the most accurate fitting model.
The graphical fitting results of the "Diffusion Plus Partition Leaching Model" and the "Solubility Limited Leaching Model" are shown in Figs.5-6.
Results
According to the results of the three model fitting, the most accurate model for the description of leaching Cs-137 from c400 cement matrix in case of embedded evaporator bottom residue (tank 02TW10B002) is the Diffusion Plus Partition Leaching Model.Table 1 contains the relative errors in percent and the determined diffusion coefficients.
radioactive isotope or heavy metal j index of the leach time interval a leached in the actual-time interval activity (concentration) for the actual-isotope or heavy metal Bq A i,0 activity or concentration of the actualradioactive isotope or heavy metal in the specimen before the start of leaching Bq Using the IFL/CFL values the D e effective diffusion coefficient could be determined.Accuracy of fitting of the leach data could be characterized by the following equation:
Fig. 1
Fig. 1 Input leach test data of cemented evaporator bottom residue of tank 02TW10B002
Fig. 2
Fig. 2 Choosing the diffusion leaching model for fit
Fig. 6
Fig. 6 Graphical result of fit Solubility Limited Leaching Model
Table 1
Model fitting results for Cs-137 leaching from c400 cement | 2,419.2 | 2018-12-20T00:00:00.000 | [
"Materials Science"
] |
Effects of side chain isomerism on the physical and photovoltaic properties of indacenodithieno[3,2-b]thiophene–quinoxaline copolymers: toward a side chain design for enhanced photovoltaic performance
Four new D–A polymers PIDTT-Q-p, PIDTT-Q-m, PIDTT-QF-p and PIDTT-QF-m, using indacenodithieno [3,2-b]thiophene (IDTT) as an electron-rich unit and quinoxaline (Q) as an electron-deficient unit, were synthesized via a Pd-catalyzed Stille polymerization. The side chains on the pendant phenyl rings of IDTT were varied from the parato the meta-position, and the effect of the inclusion of fluorine on the quinoxaline unit was simultaneously investigated. The influence on the optical and electrochemical properties, film topography and photovoltaic properties of the four copolymers were thoroughly examined via a range of techniques. The inductively electron-withdrawing properties of the fluorine atoms result in a decrease of the highest occupied molecular orbital (HOMO) energy levels. The effect of meta-substitution on the PIDTT-Q-m polymer leads to good solubility and in turn higher molecular weight. More importantly, it exhibits optimal morphological properties in the PIDTT-Q-m/PC71BM blends. As a result, the corresponding solar cells (ITO/PEDOT:PSS/polymer:PC71BM/LiF/Al) attain the best power conversion efficiency (PCE) of 6.8%. The structure–property correlations demonstrate that the meta-alkyl-phenyl substituted IDTT unit is a promising building block for efficient organic photovoltaic materials. This result also extends our strategy with regards to side chain isomerism of IDTT-based copolymers for enhanced photovoltaic performance.
Introduction
As a novel solar energy harvesting medium, polymer solar cells (PSCs) have been intensively investigated in recent years as they offer the potential to be light weight, exible and manufactured on a large-area scale at low cost. [1][2][3][4] So far, bulk-heterojunction polymer solar cells (BHJ PSCs), using a solution-processed active layer composed of an electron-donor and an electronacceptor, sandwiched between ITO and metallic electrodes, can attain promising power conversion efficiencies (PCEs) of 8-9%. 5 Conjugated donor-acceptor polymers, combining an electrondonating (D) and an electron-withdrawing (A) moiety, are particularly promising now, since judicious selection of D and A moieties can tailor the D-A interaction and p-electron delocalization, to achieve tunable band gaps, energy levels and charge transporting properties for ideal electron-donor materials. [6][7][8] In principle, an effective strategy is to combine a weak D and a strong A unit alternately. The weak D moiety can anchor a lowlying HOMO level, whilst, the strong A moiety can provide a favorable LUMO level and a suitable band gap. 9 In addition, further optimization of a given D-A framework via improvements in solubility, molecular weight and structural orientation can be achieved through side chain modulation. [10][11][12][13][14] To date, a variety of electron-rich arenes comprising of multiply fused aromatic systems have been reported. 15 Rigid backbones with horizontally or vertically extended p-conjugation can be formed by fastening or fusing adjacent building blocks to aromatic cores. One appealing electron-donating unit is the ladder-type indacenodithiophene (IDT) unit. [16][17][18][19][20][21] This structure of aromatic rings has enforced planarization which can easily suppress interannular rotation and enhance p-electron delocalization. The high degree of planarity is conducive to intermolecular charge-carrier hopping and intermolecular interactions between conjugated backbones, which thus results in high charge-carrier mobility. 22 In an effort to further extend the linear p-conjugation of the IDT unit, the central phenyl ring was covalently bonded to two thieno[3,2-b]thiophenes (TT) to form the indacenodithieno [3,2-b]thiophene (IDTT) arene, which enhances the conjugation of the system through increased planarization on changing the pentacyclic rings to heptacyclic rings. Recently, an IDTT-based polymer with a HOMO level of À5.3 eV and a medium band-gap (E g ) of 1.8 eV was reported by Jen et al., which can attain a PCE of 7%. 23 By copolymerizing IDTT with different acceptor units, tunable optoelectronic properties and charge-carrier mobilities were obtained with PCEs around 4-5%. 24,25 Several architectural designs of devices incorporating non-halogenated solvents and interfacial engineering have resulted in high PCE of 7%. 26,27 However, to expand the family of ladder-type arenes, most efforts have been focused on the designs of building blocks for polymer backbones, [28][29][30][31][32][33] whilst only a few reports have shed light on the inuence of side chain isomerization. [34][35][36] Recently, we studied two series of thiophene-quinoxaline 37-39 and IDTquinoxaline 40 copolymers and found that using meta-alkylphenyl instead of para-alkyl-phenyl side chains can enhance the solubility, molecular weight and photovoltaic performances of the copolymers. The corresponding uorinated IDT-based polymer offered a remarkably high V oc of 0.96 V with a PCE of 6.6%. Meanwhile, the non-uorinated analogue attained a PCE as high as 7.8%. The planarity and packing distance between the polymer backbones can be varied due to the steric hindrance between adjacent side chains, and then the solubility, molecular weights and packing properties can be changed on a macro level. For the structural optimization of conjugated materials, the proper anchoring of the side groups seems very useful for improving the device efficiency of PSCs. Therefore, here we attempt to extend this side-chain design strategy to the IDTT unit, which so far has been only used in IDT and quinoxaline-based copolymers.
To this end, we have designed and synthesized four copolymers based on the IDTT donor units, and quinoxaline acceptor units (Scheme 2). This is the rst direct comparison and evaluation of photovoltaic performances among the IDTT copolymers incorporating different side groups, each of which contains para-or meta-side chains on pendent phenyl rings. In addition, both non-uorinated (Q) and uorinated quinoxaline (QF) acceptor units were chosen for comparison, since the uorinated acceptor was expected to feature a lower-lying HOMO level due to the electron-withdrawing property of uorine atoms. [41][42][43][44][45][46][47] All polymers were prepared via the Stille coupling reaction of bis(trimethylstannyl)-substituted IDTT and dibromo-substituted quinoxaline monomers. The solubility, UV-Vis absorption and electrochemical properties were systematically investigated to understand the structure-property correlations of side chain isomerism and the inclusion of uorine. BHJ PSCs using PC 71 BM as the electron acceptor were fabricated for evaluating polymer performance. As anticipated, the devices based on uorinated copolymers featured higher V oc of close to 1.0 V. Compared to the para-substituted analogue PIDTT-Q-p, the meta-substituted polymer PIDTT-Q-m:PC 71 BM device attains a superior PCE of 6.8%, which is among the highest PCEs of IDTT copolymers recorded for the conversional BHJ PSC conguration. This nding agrees well with our previous study, where polymers with meta-substituted side chains show superior photovoltaic performances compared to para-substituted analogues. Characterizations of the photoresponse and lm morphology indicate that there is a direct correlation between lm morphology and device performance for these four copolymers.
Experimental section
Characterization 1 H NMR (400 MHz) and 13 C NMR (100 MHz) spectra were acquired using a Varian Inova 400 MHz NMR spectrometer. Tetramethylsilane was used as an internal reference with deuterated chloroform as the solvent. Size exclusion chromatography (SEC) was performed on an Agilent PL-GPC 220 Integrated High Temperature GPC/SEC System with refractive index and viscometer detectors. The columns are 3 PLgel 10 mm MIXED-B LS 300 Â 7.5 mm columns. The eluent was 1,2,4-trichlorobenzene. The working temperature was 150 C. The molecular weights were calculated according to relative calibration with polystyrene standards. UV-Vis absorption spectra were measured with a Perkin Elmer Lambda 900 UV-Vis-NIR absorption spectrometer. Square wave voltammetry (SWV) measurements were carried out on a CH-Instruments 650A Electrochemical Workstation. A three-electrode setup was used with platinum wires for both the working electrode and counter electrode, and Ag/Ag + was used for the reference electrode calibrated with a ferrocene/ferrocenyl couple (Fc/Fc + ). A 0.1 M nitrogen-saturated solution of tetrabutylammonium hexa-uorophosphate (Bu 4 NPF 6 ) in anhydrous acetonitrile was used as the supporting electrolyte. The polymer lms were deposited onto the working electrode from chloroform solution. Tappingmode atomic force microscopy (AFM) images were acquired with an Agilent-5400 scanning probe microscope using a Nanodrive controller with MikroMasch NSC-15 AFM tips and resonant frequencies of $300 kHz. Transmission electron microscopy (TEM) was performed with a FEI Tecnai T20 (LaB 6 , 200 kV). Without LiF/Al electrode deposition, the active layer was placed onto a copper grid using an aqueous dispersion of PEDOT:PSS. The sample was dried at room temperature before the TEM experiments were performed.
To a solution of thieno[3,2-b]thiophene (2.10 g, 15.0 mmol) in anhydrous THF (30 mL) at À30 C was added n-BuLi (6.3 mL, 2.5 M in hexane), the mixture was kept at À30 C for 0.5 h and warmed to 0 C. Then a solution of anhydrous ZnCl 2 (2.25 g, 16.5 mmol) in THF (10 mL) was added slowly. Aer the addition, the mixture was stirred at 0 C for 0.5 h. Finally, diethyl 2,5dibromoterephthalate (1) (2.38 g, 6.25 mmol) and Pd(PPh 3 ) 4 (0.14 g, 0.12 mmol) were added to the mixture. The reaction mixture was reuxed for 24 h. Aer cooling to room temperature, the excess solvent was distilled and the resulting solid was washed successively with ethanol. The residue was collected and puried by column chromatography with 1 : 2 (v/v) ethyl acetate-hexane as the eluent to give the nal compound as a yellow solid (2.19 g, 70.2%). 1
Compound 3
To a solution of compound 2 (1.50 g, 3.0 mmol) in anhydrous THF (50 mL) was added a solution of freshly prepared 3-hexylphenyl magnesium bromide from 1-bromo-3-hexylbenzene (3.62 g, 15.0 mmol) in anhydrous THF (30 mL). The solution was reuxed for 12 h. Aer cooling to room temperature, the organic layer was extracted with ethyl acetate (100 mL), washed successively with saturated brine and then dried over anhydrous MgSO 4 . The residue was puried by column chromatography with 1 : 15 (v/v) ethyl acetate-hexane as the eluent to afford a crude product as a dark red solid (2.04 g, 64.4%).
Compound 4
Compound 3 (2.0 g, 1.9 mmol) was dissolved in warm glacial acetic acid (100 mL) and conc. H 2 SO 4 (2 mL) was added slowly. The reaction mixture was reuxed for 30 min. Aer cooling to room temperature, the organic layer was extracted with dichloromethane (50 mL), washed successively with 1 M K 2 CO 3 aqueous solution and then dried over anhydrous MgSO 4 . The residue was puried by column chromatography with 1 : 5 (v/v) dichloromethane-hexane as the eluent to give the nal compound as a light yellow solid (1.78 g, 92.2%). 1
Monomer 2 (M2)
To a solution of compound 4 (1.02 g, 1.0 mmol) in anhydrous THF (10 mL) was added n-BuLi (1.0 mL, 2.5 M in hexane) at À30 C. The reaction mixture was stirred at À30 C for 0.5 h and then warmed to room temperature for another 2 h. Aer that, it was cooled to À30 C again and Me 3 SnCl (3.0 mL, 1 M in hexane) was added in one portion. The reaction mixture was stirred at room temperature overnight and then poured into water. The organic layer was extracted with diethyl ether (50 mL), washed successively with water, and dried over anhydrous MgSO 4 General procedure for polymerization 0.15 mmol of dibromo-substituted monomer, 0.15 mmol of bis(trimethylstannyl)-substituted monomer, tris(dibenzylideneacetone)dipalladium(0) (Pd 2 (dba) 3 ) (2.75 mg) and tri(o-tolyl) phosphine (P(o-Tol) 3 ) (3.65 mg) were dissolved in anhydrous toluene (12 mL) under a nitrogen atmosphere. The reaction mixture was reuxed with vigorous stirring for 24 h. Aer cooling to room temperature, the polymer was precipitated by pouring the solution into acetone and was collected by ltration through a 0.45 mm Teon lter. Then the polymer was washed in a Soxhlet extractor with acetone, diethyl ether and chloroform. The chloroform fraction was puried by passing it though a short silica gel column and then precipitated from acetone again. Finally, the polymer was obtained by ltration through 0.45 mm Teon lter and dried under vacuum at 40 C overnight.
PSC fabrication and characterization
The structure of polymer solar cells was Glass/ITO/PEDOT:PSS/ polymer:PC 71 BM/LiF/Al. As a buffer layer, PEDOT:PSS (Baytron P VP Al 4083) was spin-coated onto ITO-coated glass substrates, followed by annealing at 150 C for 15 minutes to remove water. The thickness of the PEDOT:PSS layer was around 40 nm, as determined by a Dektak 6 M surface prolometer. The active layer consisting of polymers and PC 71 BM was spin-coated from o-dichlorobenzene (oDCB) solution onto the PEDOT:PSS layer. The spin-coating was done in a glove box and they were directly transferred to a vapor deposition system mounted inside of the glove box. The thicknesses of active layers were in the range of 85-95 nm. LiF (0.6 nm) and Al (100 nm) were used as the top electrodes and were deposited via a mask under vacuum onto the active layer. The accurate area of every device (4.5 mm 2 ), dened by the overlap of the ITO and metal electrode, was measured carefully by microscope. The PCEs were calculated from the J-V characteristics recorded by a Keithley 2400 source meter under the illumination of an AM 1.5G solar simulator with an intensity of 100 mW cm À2 (Model SS-50A, Photo Emission Tech., Inc.). The light intensity was determined using a standard silicon photodiode. EQEs were calculated from the photocurrents under short-circuit conditions. The currents were recorded using a Keithley 485 picoammeter under monochromatic light (MS257) illumination through the ITO side of the devices.
Scheme 2 shows the synthetic routes for the polymers. The polymerizations were accomplished via Pd 2 (dba) 3 -catalyzed Stille coupling of the corresponding bis(trimethylstannyl)substituted monomers M1, M2 and dibromo-substituted monomers M3, M4, respectively. The same reaction time (24 h) was used for each polymer to study the inuences of the side chains and uorination. Aer polymerization, the crude polymer was washed in a Soxhlet extractor with acetone and diethyl ether for 24 h each. Aer that, the polymer was Soxhletextracted with chloroform. In this study, the properties of the polymer batches with the highest molecular weights were evaluated providing that they were readily soluble in chloroform, chlorobenzene, and oDCB. The molecular weights of the four polymers were measured by size exclusion chromatography (SEC) at 150 C with 1,2,4-trichlorobenzene as the eluent. The number-average molecular weights (M n ) of the meta-substituted polymer PIDTT-Q-m and PIDTT-QF-m are 41.8 and 17.2 kg mol À1 with polydispersity indexes (PDI) of 3.2 and 2.2, respectively. The para-substituted polymer PIDTT-Q-p and PIDTT-QF-p exhibit obviously lower M n of 33.2 and 14.1 kg mol À1 , with PDI of 2.9 and 2.2, respectively. The meta-substituted phenyl groups improve the solubility of the IDTT copolymers to provide higher molecular weight copolymers, which thus agree well with our previous study on IDT copolymers. 40 Similar results were also observed in thiophene-quinoxaline (TQ-m) copolymers. The kinked meta-side chains on quinoxaline moiety result in more twisted polymer backbone, which prevents aggregation in solution and decreases the enthalpy change DH diss during dissolution. From the equation: DG ¼ DH À TDS and T diss ¼ DH diss /DS diss , the dissolution temperature T diss is reached if the Gibbs free energy DG ¼ 0, Therefore, the lower DH diss would decrease T diss and improve the solubility of TQ-m polymers. 39 Compared to the non-uorinated analogues, both of the uorinated IDTT copolymers exhibit lower molecular weights, possibly owing to less coupling activity from increased steric hindrance between the IDTT and uorinated quinoxaline units.
Scheme 1 Synthetic routes of the monomers.
Absorption spectra
The normalized UV-Vis absorption spectra of the four polymers both in chloroform solution and in thin lm are shown in Fig. 1. All the polymers depict two distinct absorption bands in the wavelength range of 350-450 and 500-700 nm, corresponding to the p-p* transition and intramolecular charge transfer (ICT) between the IDTT and quinoxaline moieties, respectively. Compared to IDT-based polymers, horizontal extension of pconjugation by fastening outer TT units efficiently improve the ICT process, since the four IDTT-based polymers here exhibit stronger absorption in the long-wavelength regime. 19 The absorption maxima (l max ) of the polymers PIDTT-Q-p, PIDTT-Qm, PIDTT-QF-p and PIDTT-QF-m in dilute chloroform solutions are around 620 nm (Table 1). A well-resolved shoulder peak is recorded for each of the uorinated polymers. In thin lms, the absorption of the meta-substituted polymers PIDTT-Q-m and PIDTT-QF-m show bathochromic shis to 630 nm and 643 nm, respectively. A somewhat boarder absorption of the non-uorinated polymer PIDTT-Q-m is observed in comparison to the uorinated polymer PIDTT-QF-p. On the contrary, the parasubstituted polymers PIDTT-Q-p and PIDTT-QF-p exhibit little broader spectra in the solid state, l max values of which shi to 623 nm and 628 nm, respectively. The thin lm absorption spectra of the para-substituted polymers are comparable this time. These phenomena are related to the different aggregation and p-p stacking characteristics of the polymer backbones caused by the side chain and uorination. Similar absorption behaviors were also observed in our previous comparison between meta-and para-substituted IDT polymers. 40 The absorption edges of the lm spectra are located around 700 nm. Therefore, the optical band gaps extracted from absorption band edges are similar at around 1.78 eV (Table 1).
Electrochemical properties
The energy levels and band gaps of polymers are the key determinants of their photovoltaic performance, which can be estimated from their corresponding redox curves from electrochemical measurements. As shown in Fig. 2, square-wave voltammetry (SWV) was used to determine the oxidation (4 ox ) and reduction (4 red ) potentials of the four polymers. HOMO and LUMO levels were estimated from the peak potentials by setting the oxidative peak potential of Fc/Fc + vs. the normal hydrogen electrode (NHE) to 0.63 V, 48 and the NHE vs. the vacuum level to 4.5 V. 49 HOMO and LUMO levels were calculated according to the formula HOMO ¼ À(E ox + 5.13) eV and LUMO ¼ À(E red + 5.13) eV, where E ox and E red were determined from the oxidation and reduction peaks, respectively. 50 In comparison with the HOMO levels of IDT-based copolymers, those of IDTT-based copolymers are slightly higher in HOMO level due to the more electron-donating TT units on the backbone. The HOMO levels of polymers PIDTT-Q-p and PIDTT-Q-m are À5.64 eV and À5.68 eV, respectively. As expected, the two uorinated polymers PIDTT-QF-p and PIDTT-QF-m display low-lying HOMO levels of À5.76 eV and À5.82 eV, respectively, owing to the electronwithdrawing effect of the uorine atoms. 40,42,47 Compared to the corresponding para-substituted counterparts, the metasubstituted polymers PIDTT-Q-m and PIDTT-QF-m feature a little deeper HOMO levels, which can be attributed to the weaker electron-donating effect from the meta-alkyl-phenyl rings. 51 Since the V oc of BHJ PSCs is positively correlated to the energy difference between the HOMO level of electron donor and the LUMO level of electron acceptor, a low-lying HOMO level is a prerequisite for achieving a high V oc . The LUMO levels of the four polymers are À3.65 eV, À3.58 eV, À3.63 eV and À3.67 eV, respectively. The energy difference between the LUMO levels of the four polymers and that of PC 71 BM (À4.3 eV) are large enough for efficient exciton dissociation. According to the equation, E ec g ¼ e(E ox À E red ) (eV), the electrochemical band This journal is © The Royal Society of Chemistry 2014 gaps of the four polymers are around 2.0 eV. The uorinated polymers PIDTT-QF-p and PIDTT-QF-m exhibit slightly boarder band gaps, possibly due to the fact that they are less planar and lower molecular weights.
Photovoltaic properties
To investigate the photovoltaic properties of the four polymers, bulk-heterojunction polymer solar cells (BHJ PSCs) with a device conguration of ITO/PEDOT:PSS/polymer:PC 71 BM/LiF/Al were fabricated. Phenyl-C 71 -butyric acid methyl ester (PC 71 BM) was used as the electron acceptor due to its good absorption properties across the visible spectrum. 52,53 The measurements of photovoltaic performances were carried out under an illumination of AM 1.5G simulated solar light at 100 mW cm À2 . The optimized results were obtained by varying polymer:PC 71 BM weight ratios, active layer thicknesses, post-annealing conditions and additives. The corresponding PSC parameters (shortcircuit current density J sc , open circuit voltage V oc , and ll factor FF) are summarized in Table 2. The J-V curves are shown in Fig. 3(a), for PIDTT-Q-m:PC 71 BM and PIDTT-Q-p:PC 71 BM based devices, with the corresponding V oc being 0.81 V and 0.83 V, respectively. As anticipated, both devices based on uorinated copolymers feature a higher V oc of 0.95 V and 0.92 V, respectively, which agrees with their deeper HOMO levels. The metasubstituted uorinated polymer PIDTT-QF-m exhibits a slightly higher V oc compared to the para-substituted analogue PIDTT-QF-p, which is also observed in our previous study of IDT based copolymers. On the other hand, the non-uorinated polymer PIDTT-Q-p and PIDTT-Q-m based devices display inverse results with regards to their corresponding V oc . In this case, one possible reason may come from the minor Fermi level shi as a result of the formation of polymer and PCBM aggregates, which can be affected by the domain sizes of D-A components. 54 Without any post-treatment, the PSC using the meta-substituted polymer PIDTT-Q-m display a superior J sc of 11.8 mA cm À2 in comparison with other three copolymers, which results in a PCE of 6.7%. Since these four copolymers show comparable absorption spectra in thin lms, the enhanced J sc of the PIDTT-Q-m based devices could be ascribed to its higher molecular weight and more favorable nanostructure of its D-A components according to subsequent morphology study. Using 2.5% of DIO as an additives, a slightly higher PCE of 6.8% is recorded, which is one of only a few high-performance results among IDTT based copolymers. This result demonstrates that our side chain isomerization strategy for enhanced photovoltaic performance can be extended from previously studied IDT copolymers to IDTT copolymers. As shown in Fig. 3(b), external quantum efficiencies (EQE) were measured to evaluate the photoresponse of the PSCs. The enhanced quantum efficiency in the region of 400-500 nm is attributable to the absorption of PC 71 BM in the visible region. 52 The photocurrents calculated via integrating the EQE with an AM 1.5G reference spectrum are listed in Table 2, which agrees well with the corresponding J sc obtained from the J-V measurements. Among the four copolymers, the devices based on PIDTT-Q-m show higher photo conversion efficiencies over the whole visible region, implying more efficient charge collection and less charge recombination at the junction between the polymer PIDTT-Q-m and PC 71 BM. In addition, the corresponding PIDTT-Q-m device with 2.5% DIO demonstrates a slightly higher EQE, which is consistent with the J-V results.
Film morphology
To further understand the reasons for different photovoltaic performances of the four polymers, the morphology of the active layers were studied by atomic force microscopy (AFM) and transmission electron microscopy (TEM). AFM measurements were carried out to study the surface morphology of the blend layers. As shown in Fig. 4 (AFM), AFM images of the uorinated copolymers reveal very large polymer domains with a root mean square (RMS) roughness value of 6.9 nm for PIDTT-QFp:PC 71 BM and 10.1 nm for PIDTT-QF-m:PC 71 BM, respectively. Although the domain size of the non-uorinated PIDTT-Qp:PC 71 BM blend layer decreases, it also shows a rough surface with RMS roughness of 3.8 nm. For these three copolymers, we propose that the poor miscibility of the D-A components results in a non-optimal nanostructure, which in turn limits the photocurrent of the corresponding devices. The metasubstituted non-uorinated polymer PIDTT-Q-m:PC 71 BM blend layer however forms a continuous, ne phase-segregated morphology of its D-A components with a RMS roughness of 1.2 nm, which depicts a uniform and smooth surface. Aer mixing 2.5% DIO in the blend, the RMS roughness slightly increases to 1.9 nm, since the additive enables the development of short bril nanostructures with favorable grain boundaries. 55 To probe the morphology throughout the active layers, TEM was employed to investigate the real-space images in the polymer-fullerene blends. As shown in Fig. 4 (TEM), the PIDTT-Q-p/ PC 71 BM blend layer show large polymer fabric. Both of the PIDTT-QF-p/PC 71 BM and PIDTT-QF-m/PC 71 BM blend layers depict large phase separation, wherein 50-200 nm dark clusters are formed. Since PC 71 BM has a higher scattering density than the polymer, these dark clusters are ascribed to the aggregation of PC 71 BM. 56 For these three polymers, the dimensions of the phase separation and the discontinuous networks are much larger than the typical exciton diffusion length (10 nm), and thus the photogenerated excitons may recombine more easily during exciton diffusion. This may result in poor exciton separation, a low J sc and thereby limit the corresponding device performance. The PIDTT-QF-m/PC 71 BM blend layer has even bigger domain size compared to the PIDTT-QF-p/PC 71 BM blend, which is consistent with its lower PCE than the PIDTT-QF-p based device. On the contrary, a signicant reduction of the phase separation is observed in the PIDTT-Q-m:PC 71 BM blends. Both of the blend lms with and without DIO form continuous and tiny PC 71 BM aggregates. On the basis of the observations from AFM and TEM images, it is evident that the polymer PIDTT-Q-m/PC 71 BM has the most favorable morphology among these four blend layers. Continuous pathways have been formed properly, which subsequently enhance exciton diffusion as well as charge separation. As a result, more efficient exciton diffusion and dissociation in D-A phases correlate well with the superior J sc and PCE obtained for the PIDTT-Q-m based devices. Similar morphological properties were observed in our previous study of meta-substituted IDT-based copolymers. The different molecular conformation and solid state order of the IDTT copolymers may affect the lm morphology. From the previous density functional theory (DFT) calculations of the IDT copolymers, the meta-substituted side chains tend to wrap around the conjugated backbone, while the para-substituted side chains extend from the conjugated backbone in all directions. This extended side-chain conformation raises steric hindrance between adjacent chains, and thus a larger distance between polymer backbones were recorded via the grazing-incidence wide-angle X-ray scattering (GIWAXS). 40 Fig. 4 AFM topography (5 Â 5 mm 2 ) and TEM bright field images of optimized IDTT copolymer:PC 71 BM blended layers.
Conclusion
In summary, two pairs of IDTT-quinoxaline based copolymers with para-hexyl-phenyl or meta-hexyl-phenyl side chains on the IDTT units were synthesized and characterized to understand the effect of side chain isomerization. As anticipated, the BHJ PSCs based on the uorinated polymers PIDTT-QF-p and PIDTT-QF-m offer a high V oc of 0.92 V and 0.95 V, respectively. The meta-substituted polymers PIDTT-Q-m and PIDTT-QF-m offer better solubility and higher molecular weights. Although positioning the alkyl side chain in either the para-or metaposition of the phenyl ring has little inuence on the absorption and energy levels as well as the band gaps of corresponding copolymers, it plays an important role in forming a more appropriate and ne-grained nanostructure in the D-A blend. As a result, the PIDTT-Q-m:PC 71 BM device attains a superior photocurrent and a PCE as high as 6.8%. This result is among the highest efficiency achieved for IDTT copolymers used in conversional BHJ PSCs. It is also a comparatively high value for board band-gap polymers, with band gaps around 1.8 eV, which enables the PIDTT-Q-m polymer to be an appealing candidate for the front cell of tandem devices. Gratifyingly, we demonstrate that the meta-alkyl-phenyl substituted IDTT moiety is a promising building block for efficient organic photovoltaic materials. In our forthcoming work, through further structural optimization of the electron-withdrawing moieties and pendent side groups, it is entirely feasible to synthesize higher-performing IDTT-based conjugated molecules and polymers. Moreover, the here discussed structure-property correlations attest and extend our side-chain design strategy to IDTT-based copolymers, which is expected to be quite valuable for enhancing the photovoltaic performance of conjugated polymers. | 6,398 | 2014-10-21T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Ethnomedicinal application of species from genus Thymus in the Pirot County (Southeastern Serbia)
The species from genus Thymus are polymorphous plants from the family Lamiaceae, spread in Serbia in many species, subspecies, varieties, and forms. The aerial parts of species from genus Thymus have a long tradition of being used worldwide. The subject of this study was the research of ethnomedicinal application of thyme in Pirot County (Southeastern Serbia). Ethnomedicinal surveys were conducted among the population in four municipalities: Pirot, Babušnica, Bela Palanka, and Dimitrovgrad. It was noticed that 56.9 % of interviewed persons use the species of genus Thymus in the Pirot County. They are usually used in the county’s folk medicine in the form of herbal tea against colds to relieve dry and spastic cough, especially in bronchitis and asthma, and for sedation.
INTRODUCTION
The species from genus Thymus are perennial herbs or semishrubs of the family Lamiaceae. In Serbia, they are highly polymorphous species, spread in 30 species, and many subspecies, varieties, and forms occurred differing in size, branching, and hairiness of the stems, the colors, the shape, the size of the hair of the leaves, and other morphological characteristics (Diklić, 1974;Jančić et al., 1995). The genus Thymus is very complex from the taxonomical and systematic point of view, demonstrating significant polymorphism in morphological characteristics and the composition of essential oils (Jarić et al., 2015a).
Thymus species have been used since ancient times to treat diseases of the respiratory and digestive systems (Zarzuelo and Crespo, 2002). In ancient Egypt, they were used to make perfumed balms for embalming and medical purposes, and in ancient Greece, according to Dioscorides, thyme was used to treat asthma and loosen congestion in the throat stomach (Jarić et al., 2015a). The pharmacological records of the Chilandar Medical Codex (15 th and 16 th centuries) mention the use of thyme as a remedy against headaches caused by colds, laryngitis, and diseases of digestive organs and as an antitussive (Jarić et al., 2014).
The aerial part of species from genus Thymus have applications not only in folk medicine but also in modern medicine. It has antiseptic, fungicide, expectorant, spasmolytic, carmina-tive, sedative, diuretic, and diaphoretic activities (Aneva et al., 2018). Due to its pharmacological characteristics, the essential oil of species from genus Thymus represents an important natural resource for pharmaceutical industry, beside its antioxidnant and antimicrobial properties (Nikolić et al., 2019). Ilić et al. (2017) have evaluated the antibacterial and streptomycinmodifying activity of Thymus glabrescens essential oil, and its components geraniol, geranyl acetate, and thymol. In this study it was noticed that all substance-streptomycin combinations produced antagonistic interactions, and combinations between geraniol and thymol showed dominant additive effect. Species from genus Thymus are also the sources of natural products for nutritional supplements or functional food components in the dietary industry Jarić et al. (2015a);Nikolić et al. (2019). In Pirot County (Southeastern Serbia), the species from genus Thymus are widespread at meadows and pastures, with collecting period May-September. The following compounds have been previously reported in Thymus spp.: geraniol, geranyl acetate and thymol as the main components of the essential oil, tannins, bitter substance, flavonoids (Marković et al., 2010) . A detailed review of aromatic plants presence in investigated and described plant communities on the Vidlič Mountain in the Pirot County was presented by Marković et al. (2019). They have noticed the following species from the genus Thymus: Thymus striatus Vahl., Thymus glabrescens Willd., Thymus praecox Opiz subp. jankae (Čelak.) Jalas, Thymus pulegioides L. genus Thymus, also known as "babina dušica", "dušičina", "majčina dušica," or "majkina dušica" to the local population of study area. This paper will discuss species' application from the genus Thymus based on a population survey in the studied area: Pirot, Babušnica, Bela Palanka, and Dimitrovgrad. Also, a comparison with other regions in Serbia was performed.
METHODOLOGY
The study area includes four municipalities Pirot, Bela Palanka, Babušnica, and Dimitrovgrad in Southeastern Serbia ( Figure 1). It covers 2761 km 2 (Pirot County GIS, 2019). The climate is temperate continental, with the transitional climatic variations to sub-mountain and mountain climate at altitude over 600 meters (Marković et al., 2010). Lists of medicinal plants in the Pirot County were recorded by Randelović et al. (1991), and Marković et al. (2010), while on the Vidlič Mountain in the study area were reported 60 aromatic plants were recorded by Marković et al. (2019;2009). Information collected in the form of a questionnaire provided data about medicinal plants' knowledge among the local community at Pirot County. Data gathered from the participants were the common name of plants, the disease for which the plant is used, the plant part they use, and preparation forms. Special selection criteria were not carried out in the choice of the informants. Plant species were identified according to Josifović (1970Josifović ( -1986, Jordanov (1963Jordanov ( -1979, and Tutin et al. (1964Tutin et al. ( -1980. The voucher specimens of collected plant material deposited in HMN herbarium are presented in Table 1. The nomenclature of the taxa listed, given at subspecies level, was compiled from database: The EURO+MED PlantBase -the information resource for Euro-Mediterranean plant diversity (Euro+Med, 2006-). The results were systematized using Microsoft Excel and presented in Table 2 with the frequency of forms of application for traditional use, in Table 3 systematized regarding therapeutic groups with number of use reports per indication. Comparison with other regions in Serbia was also done using tabular overview (Table 4).
Quantitative analysis
It was noticed that applications of species from genus Thymus (Thymus spp.) are well known among population of the Pirot County. A total of 359 of 631 interviewed persons i.e. 56.9 % respondents mentioned "babina dušica", dušičina", "majčina dušica" or "majkina dućica" and their medicinal usage (Table 2,3), out of which 301 (83.85 %) of them were of Serbian nationality, 55 (15.32 %) were of Bulgarian nationality, and 3 (0.83 %) were Roma. Among respondents, a total of 180 were male, and 179 were female. In Pirot municipality, Thymus spp. were known to 198 (55.15 %) respondents, in municipality Babušnica 46 (12.81 %), in municipality Bela Palanka 73 (20.34 %) and municipality Dimitrovrgad 42 (11.70 %) interviewed people. The age of respondents who mentioned Thymus spp. was from 16 to 85. The majority of interviewed persons mentioned the use of the aerial part of the Thymus spp. in the form of tea in the treatment of common cold (149 persons, 41.50 %), cough (60 persons, 16.71 %), and for sedation (42 persons, 11.70 %). In the disease prevention, Thymus spp. were mentioned by 19 persons, and the same number of respondents were familiar with its application against stomach diseases, and eight respondents against lung diseases (asthma, bronchitis). In the treatment of sore throat Thymus spp. was mentioned by seven persons, and the same number of people were familiar with their effect on stomach.
Five respondents mentioned application in nutrition as a spice. The same number of respondents were familiar with the usage of Thymus spp. in treating bronchitis and respiratory diseases (each one). The application for improving the immune system was mentioned by three respondents. The same number of persons was familiar with their effect against high temperature and the heart and nerves (sedative effects). Likewise, three people mentioned "majčina duišica" or "dušičina", but did not know its use. Two persons mentioned improving hearing, and the same number of respondents were familiar with its effect against headaches, against high blood pressure, and kidney and bladder diseases. One respondent mentioned usage of "majčina dušica" in the form of tea for rinsing the oral cavity, against thyroid diseases, one as an antibacterial antiseptic, for the treatment of influenza, against menstrual pain, against diabetes, against inflammation, and one was familiar with usage in the filling the pillow against insomnia and for good sleep.
The most common form of usage is the form of tea for oral application. Rarely used forms are extract, dry herba, and form of oil. Extract in alcohol is used against cough, dry herba in the nutrition as a spice, or for filling the pillow against insomnia. The form of oil is used for sedation.
Comparative ethnopharmacological analysis
The results of usage of species from genus Thymus of previous studies in neighboring regions in Serbia are presented in Table 4. Table 1 Results presented in Table 2 Presented data Furthermore, Matejić et al. (2020) were identified Thymus praecox Opiz for Timok and Svrljig region. However, they mentioned genus level (Thymus spp.) because there are other species from genus Thymus which may be used and have the same effect. The present study results are in accordance with the findings of , and Matejić et al. (2020) probably as a result of proximity of investigated areas having in mind geographical locations of the geographical locations of the Svrljig, Timok, Rtanj Mountain, and Pirot County, which is probably caused by the proximity of these investigated areas. The following therapeutical applications of species from genus Thymus in the Pirot County systematized in therapeutic groups, are different and new from the above-mentioned ethnopharmacological studies in Serbia: • Nervous system: epilepsy, filling the pillow against insomnia.
• Inflammation: rinsing of the oral cavity, inflammation in general.
• Cardiovascular system: for the heart, high blood pressure.
• Urogenital system: kidney and bladder diseases.
CONCLUSION
Based on the results of interviews with the local population in the Pirot conducted in this study, it can be concluded that species from genus Thymus commonly known as "babina dušica", "dušičina", majčina dušica", or "majkina dušica" are well known to people living in the rural areas, and they are used for indications from various therapeutic areas. The aerial part of Thymus spp. is frequently used in the form of tea for the treatment of cold, cough, and for sedation, and rarely in disease prevention, against stomach diseases, lung diseases, against sore throat, for the stomach, against respiratory diseases, improvement of the immune system, against high temperature, for the heart, for nerves, improving hearing, against headache, against high blood pressure, for rinsing the oral cavity, against thyroid diseases, as an antibacterial antiseptic, against epilepsy, against influenza, against menstrual pain, against inflammation, against diabetes. Except as tea, the species from genus Thymus are used as an extract against cough, dry herba is used in the nutrition as a spice or for filling the pillow against insomnia, and oil is used for sedation. | 2,410.8 | 2020-11-26T00:00:00.000 | [
"Biology"
] |
Epigenetic silencing of the non-coding RNA nc886 provokes oncogenes during human esophageal tumorigenesis.
nc886 (= vtRNA2-1 or pre-miR-886) is a recently discovered noncoding RNA that is a cellular PKR (Protein Kinase RNA-activated) ligand and repressor. nc886 has been suggested to be a tumor suppressor, solely based on its expression pattern and genomic locus. In this report, we have provided sufficient evidence that nc886 is a putative tumor suppressor in esophageal squamous cell carcinoma (ESCC). In 84 paired specimens from ESCC patients, nc886 expression is significantly lower in tumors than their normal adjacent tissues. More importantly, decreased expression of nc886 is significantly associated with shorter recurrence-free survival of the patients. Suppression of nc886 is mediated by CpG hypermethylation of its promoter, as evidenced by its significant negative correlation to nc886 expression in ESCC tumors and by induced expression of nc886 upon demethylation of its promoter. Knockdown of nc886 and consequent PKR activation induce FOS and MYC oncogenes as well as some inflammatory genes including oncogenic NF-κB. When ectopically expressed, nc886 inhibits proliferation of ESCC cells, further demonstrating that nc886 could be a tumor suppressor. All these findings implicate nc886 as a novel, putative tumor suppressor that is epigenetically silenced and regulates the expression of oncogenes in ESCC.
INTRODUCTION
We have recently identified a 101 nucleotide (nt) long non-coding RNA (ncRNA), nc886 (also prematurely named as vtRNA2-1 or pre-miR-886), that is ubiquitously expressed in normal tissues. nc886 could be a tumor suppressor as suggested by several lines of observations. First, its expression is decreased in cancer cell lines from several tissue origins [1,2]. Second, a CpG island at its promoter region is hypermethylated in lung cancer and acute myeloid leukemia [3,4]. Third, its genomic locus at human chromosome 5q31 is frequently deleted in leukemia [5,6].
Thus far, nc886's known function is a cellular RNA ligand and inhibitor of PKR (Protein Kinase RNAactivated), a pleiotropic protein implicated in cellular defense against virus, stress responses, inflammation, and tumorigenesis (reviewed in [7]). Knockdown of nc886 activates PKR and ectopic expression of nc886 represses the interferon response triggered by double-stranded RNA (dsRNA), a canonical PKR activating ligand [1,2,8]. Presumably, nc886's function in normal cells is to adjust a cellular level of tolerance to diminutive triggers which should be normally insignificant so that signaling pathways and metabolism are not disturbed. However, its molecular roles in tumor development are currently unknown.
Tumorigenesis is a multi-step process driven by genetic/epigenetic alterations causing activation of oncogenes as well as inactivation of tumor suppressor genes. Activation of oncogenes can be driven also by extracellular signals or environmental cues. For instance, the expression of FOS and MYC oncogenes are induced by growth factors [9], oxidative stress [10], dsRNA, and viral infection [11]. Another example is oncogenic NF-κB, whose activation in cancer is attributed mostly to pro-inflammatory stimuli, but rarely to genetic/epigenetic alterations (reviewed in [12]).
Esophageal cancer (EC) is one of the most malignant and dismal prognostic tumors, ranked eighth in incidence rate and sixth in cancer-related death worldwide [13]. EC is classified into two major histopathological types: esophageal squamous cell carcinoma (ESCC) and esophageal adenocarcinoma (EAC). These two subtypes differ in carcinogenesis, cancer genetics, prognosis and pattern of recurrence [14]. ESCC is dominant over EAC worldwide. In case of EAC, its pre-malignant stage is metaplasia such as Barrett's esophagus which is most likely caused by chronic exposure to acid and bile reflux. However, the molecular mechanism of ESCC carcinogenesis is still elusive, except for some information that its etiology is correlated with smoking and consumption of hot tea. Furthermore, lack of good diagnostic markers and treatment strategies has rendered ESCC a major challenge in clinic. As an endeavor to understand ESCC, we investigated nc886 in this study.
Suppression of nc886 expression in ESCC is caused by CpG hypermethylation.
As the first step to investigate nc886 in EC, we measured the expression of nc886 in 84 pairs of a tumor tissue and its adjacent normal tissue from ESCC patients. nc886 expression was significantly suppressed in tumors (Fig 1A-B). It is worth pointing out that our measurement was done by Northern hybridization to ensure detection of nc886 as a 101 nt long ncRNA. Mature microRNA (miRNA) was detected in none of the samples (Fig S1). When the ESCC patients were stratified according to nc886 expression, its lower expression was significantly (P = 0.01) correlated with shorter recurrence-free or overall survival of the patients (Fig 1C and S2). Clinicopathological characteristics of the nc886 high-and lowexpression groups were summarized in Table S1. nc886 expression was also decreased in ESCC cell lines (TE-1, TE-8, TE-12, and TT) relative to a nonmalignant esophageal cell line Het-1A (Fig 1D). Of note, nc886 expression in Barrett's esophagus, metaplasia and EAC cell lines (BE-3, OE-33 and SK-4 respectively) remained as high as in Het-1A (Fig 1D).
Our inspection of the nc886 genomic region (by using http://cpgislands.usc.edu/, [15]) detected a strong CpG island (Fig 2A). Our pyrosequencing data in the ESCC patient samples indicated that the nc886 promoter region tended to be hypermethylated in tumors compared to adjacent normal tissues ( Fig 2B). This CpG hypermethylation was a cause of nc886's suppressed expression in ESCC, as evidenced by the following data. First, negative correlation was seen between CpG DNA methylation and RNA expression in the ESCC tumors and cell lines (Fig 2C-D). Second, treatment with 5-Aza-2'deoxycytidine (AzadC), a DNA methyltransferase inhibitor, led to de-repression of nc886 expression in TT and TE-8 cells (Fig 2E). Third, nc886 transcription was active from a transfected DNA fragment (649-mer DNA shown in Fig 2A) but was inactivated when the DNA fragment was in vitro methylated ( Fig 2F).
Induction of oncogenes upon nc886 knockdown.
To explore cellular events triggered by nc886 suppression, we examined global gene expression profiles in Het-1A and two ESCC cell lines (TE-1 and TE-8) by mRNA array after nc886 knockdown. Efficient knockdown was confirmed by Northern hybridization (representative Northern blots shown in Fig 3A) and by our array data in which nc886 was the most decreased gene in the three cell lines (Fig S3).
From gene expression data, we sorted genes according to fold-change and extracted a set of the most increased (or decreased) genes. For example, 378, 433, and 1055 genes were selected as significantly induced (P < 0.01 and higher than 0.5-fold) genes respectively from Het-1A, TE-1, and TE-8 cells, when nc886 expression was silenced (Fig 3B). While 33 genes were induced in all three cell lines, only 6 genes were commonly repressed ( Fig S4).
Therefore, we focused on induced genes, especially the 33 commonly induced genes ( Fig S3 and Table S2). More than one third were cancer-related ( Fig 3B) according to classified cancer genes in Cancer Portal (http://rgd.mcw.edu/wg/portals/). Notably, the 33 genes www.impactjournals.com/oncotarget hybridization of nc886 and 5S rRNA (for equal loading) in 84 pairs of an ESCC tumor and its adjacent normal tissue (designated T and N respectively). Sample identity is anonymously designated in #-number on the top of gels. RNA from a nc886 expressing cell line CRL2741 was included as a quality control [2]. Each band was quantified with AlphaView software 2.0.1.1 (Alpha Innotech, Santa Clara, CA). nc886 values were normalized to 5S rRNA values, and nc886/5S rRNA values of each tumor relative to its normal tissue is shown at the bottom ("Tumor/Normal"). "n.d." indicates "not determined" because 5S rRNA values could not be obtained due to RNA degradation. Green and red brackets on the top designate "Tumor/Normal" value less than 0.5 and more than 2, respectively.B. Comparison of nc886/5S rRNA values (from panel A) between tumors and adjacent normal tissues.C. Recurrence-free survival (RFS) curve. The 84 patients were classified into two sub-groups according to nc886 RNA expression levels (the "Tumor/Normal" value in panel A). High-and low-nc886 group was discriminated by the median value. Patients at risk were added below the survival curve.D. Northern hybridization of nc886 and 5S rRNA as a loading control in esophageal cell lines. Molecular sizes from Decade markers (= 10-nt ladder) are indicated on the right. included well-known oncogenes FOS, MYC, MAFB, and ID2, all of which have been shown to have a transforming ability when aberrantly expressed [16][17][18][19]. Their induction in the array data was confirmed by qRT-PCR measurement ( Fig 3C). FOS encodes a subunit of the activator protein-1 (AP-1) and MAFB is also a member of the AP-1 family. In accordance with the induction of FOS and MAFB expression, AP-1 activity was elevated as proven by our reporter assay in which luciferase expression was driven by a promoter containing AP-1 recognition elements ( Fig 3D). Oncogenic NF-κB was also activated, as shown by its target genes in the 33 genes ( Fig 3B) and by elevated luciferase expression whose promoter contained NF-κB target sites (Fig 3D).
FOS and MYC have been classically known as "immediate early genes" that surge quickly upon growth stimuli and then decline [9]. Interestingly, these genes are also induced by dsRNA [11], which is a viral replication intermediate and the best ligand for PKR activation. This induction is known to be abrogated by 2-aminopurine, an inhibitor for kinases including PKR [20]. It is also known that active PKR provokes the NF-κB pathway (reviewed in [7]). Our previous finding was that nc886 is a PKR inhibitor [1,2,8]. In our data here, nc886 knockdown activated PKR in the three cell lines, as shown by the appearance of phospho-PKR, an active form of PKR (Fig 3A). The induction of FOS, MYC and ID2 upon nc886 knockdown was significantly mitigated by siRNAmediated knockdown of PKR (Fig 3E). All these data clearly indicated that these oncogenes were activated through the nc886-PKR pathway and suggested nc886 as a tumor suppressor in ESCC.
nc886 knockdown also induced inflammation/ infection genes and pro-apoptotic genes (Fig 3B). This was not surprising, given that PKR was activated ( Fig 3A). As PKR activation typically occurs during viral assault, cells would have responded to nc886 knockdown as if virus infected and were committed to death before exhibiting any malignant phenotype (data not shown). This was corroborated by activation of the Toll-like receptor pathway ("TOLL_PATHWAY" in Fig 3F) in our pathway analysis. As in a cellular response to pathogen or stress, nc886 knockdown provoked many signaling pathways and consequently induced transcription factors (Fig S5).
Intersection of activated pathways in the three cell lines exhibited a significant overlap and yielded ten common Anti-oligos and siRNA (against PKR) were co-transfected into TE-8 cells as previously described [1]. Except that ACTB was used for normalization, all other descriptions are the same as panel C.F. Venn diagram of activated signaling pathways (BIOCARTA pathways) analyzed from the mRNA array data upon nc886 knockdown in the three cell lines. pathways ( Fig 3F). All the ten pathways involved AP-1, MYC, and NF-κB, in concordance with the increased expression of FOS, MAFB, and MYC. Since our data so far indicated that nc886 is a putative tumor suppressor in ESCC, we questioned whether re-expression of nc886 in ESCC cells would render any anti-proliferative and/or pro-apoptotic phenotype. To test this, we sought to construct a transgenic TT cell line stably expressing nc886. Despite multiple attempts in TT and also another cell line TE-12, we failed to recover those cells, indicating that nc886 expression was deleterious in ESCC cells. So, we made nc886 RNA by in vitro transcription and transfected it as an alternative way to assess nc886's acute effect. When transfected at sub-nanomolar levels, nc886 RNA significantly inhibited proliferation of TE-12 and TT cells (Fig 4A), both of which are ESCC cells expressing very low levels of nc886 (see panel C and also Fig 1D for their nc886 expression levels). In contrast, the same treatment did not inhibit proliferation of non-ESCC cells where nc886 expression was not suppressed (Het-1A and BE-3 cells in Fig 4B). We further measured apoptotic markers (caspase-3 and PARP cleavage in Fig 4C) and found that nc886 induced apoptosis in the ESCC cells but not in Het-1A cells. Our data proves nc886's potent and selective pro-apoptotic activity, in agreement with its tumor suppressor role in ESCC.
DISCUSSION
In this report, we obtained several lines of evidence supporting that nc886 is a putative tumor suppressor in ESCC. First, nc886 expression was significantly decreased in ESCC tumors by CpG hypermethylation at its promoter, a common mechanism to silence tumor suppressor genes. Second, the lower expression of nc886 was associated with poorer survival of ESCC patients. Third, nc886 knockdown activated oncogenes. Forth, re-introduction of nc886 inhibited the growth of ESCC cells. Our results are summarized in It is very important to point out that nc886's activities and features in ESCC were represented by its full length in size of 101 nts, but not by its miRNA product (miR-886). We have not detected miR-886 in our Northern blots. Also, our anti-oligo for nc886 knockdown was off-target from miR-886. Furthermore, miR-886 has been removed from the miRNA database (miRbase: www. mirbase.org). We infer that nc886's central portion is more important than either ends harboring mature miR-886-5p or -3p, because the central portion is the binding domain for PKR [8] and PKR activation was a reason for the induction of several oncogenes upon nc886 knockdown (Fig 3).
A striking result in this study was that nc886 knockdown induced many genes including the renowned oncogenes MYC and FOS (Fig 3). These genes do not share any sequence homology to nc886, and so nc886's regulatory action on these genes could not be through a miRNA mechanism (reviewed in [21]). Recently, nuclear ncRNA's role in regulation of gene expression through chromatin remodeling has been intensively studied (reviewed in [22]); however, this mechanism was not likely either because nc886 is exclusively in the cytoplasm [2]. Based on our data regarding PKR ( Fig 3A and E), it would be most reasonable to interpret that nc886 knockdown was a mimicry of viral infection and accordingly induced MYC and FOS as well as genes related to inflammation and infection such as oncogenic NF-κB. The causal relationship between inflammation and cancer is well documented from many studies and widely accepted (reviewed in [23]).
Our data also opens a possibility to use nc886 in clinical applications. nc886 RNA expression and/or its DNA methylation can be a prognostic marker for ESCC patients. nc886 RNA level is very abundant and easier to measure than miRNAs [2], because it is 101 nts long and so can be measured by regular qRT-PCR with two specific primers. Measurement of its CpG methylation can be a proxy marker, if RNA quality or tissue contamination is a concern. Detection of nc886 depletion by methylation in ESCC might classify the high risk patients with recurrence and identify candidates for therapy with peri-operative adjuvant treatment. Because patients with reduced nc886 expression would have earlier recurrence than others, it would be advisable to utilize more aggressive treatments such as chemotherapy or chemo-radiation after surgery. However, this approach needs to be tested in a prospective clinical study. Also, nc886's selective toxicity to ESCC cells (Fig 4) could be utilized in cancer treatment in the future.
Our work here is the first extensive study of nc886 in ESCC. Our expression and methylation data from copious patient samples indicate nc886's clinical significance. We speculate that epigenetic silencing of nc886 is a cell autonomous cue to provoke inflammation and promotes ESCC tumorigenesis. To the best of our knowledge, such a role for a ncRNA is unprecedented. All of our data here are new findings and thus leave many outstanding questions about nc886. The study of nc886 is just at the beginning stage and much more effort is needed for elucidation of its role and regulation in cancer, which should precede its clinical use.
Cell lines and tissue samples
Cell lines in this study were obtained from Drs. Xiaochun Xu and Julie J. Izzo at the University of Texas MD Anderson Cancer Center, Houston, TX and cultured as described in Supplemental Information. Cell lines were validated by STR DNA fingerprinting using the AmpF_ STR Identifiler kit (Applied Biosystems, Grand Island, NY) according to manufacturer's instructions. The STR
Figure 5: Cartoon summarizing results in this study
profiles were compared to known ATCC fingerprints (http://www.atcc.org/) and to the Cell Line Integrated Molecular Authentication database (CLIMA, version 0.1.200808, http://bioinformatics.istge.it/clima/) [24]. The STR profiles matched known DNA fingerprints or were unique.
ESCC patients included in this study were those who had thoracic EC and underwent complete esophageal resection with adequate lymph node dissection, but without any perioperative treatment such as chemotherapy or radiotherapy. Tissues were collected freshly within 30 minutes after surgical removal and were stored at -196 o C in the tumor bank at the National Cancer Center in Korea (NCC), after pathologist's review and macro-dissection. We chose 84 ESCC samples for which a tumor and its adjacent normal tissue were both present. Collection of human samples and protocols for investigation were approved by the Institutional Ethics Committee (No. NCCNCS-11-435) at NCC. Also, we obtained informed consent and agreement from patients. Epidemiological data were collected based on in-patient medical records in NCC.
RNA isolation and measurement
Total RNA from patient tissue samples and cell lines was isolated by Trizol reagent (Invitrogen, Carlsbad, CA). Northern hybridization and qRT-PCR measurement of nc886 and other genes were performed as previously described [2]. Sequence information on qRT-PCR primers is available upon request.
Pyrosequencing to measure CpG DNA methylation at the nc886 promoter region
Genomic DNA isolation and bisulfite-conversion were performed with a PureLink™ Genomic DNA kit (Invitrogen) and an EZ DNA methylation kit (Zymo Research, Orange, CA) respectively. Primers for pyrosequencing were as previously described [4].
Statistical analysis of patient data
Kaplan-Meier plots and the log-rank test were used to estimate difference in patient's prognosis between two groups. When comparing two values, we used Student's t-test for continuous values and Chi-square or Fisher's exact test for discrete values. In analyzing nc886 expression and methylation data, the paired t-test was applied to evaluate the significance in difference between tumors and adjacent normal tissues. Correlation between nc886 expression and methylation was calculated by Spearman correlation analysis. All statistical tests were two-tailed, and a P-value less than 0.05 was considered to be significant.
Reagents and antibodies
AzadC was purchased from Sigma-Aldrich (St. Louis, MO); CpG methyltransferase (M.Sssl) was from New England Biolabs (Ipswich, MA); Decade markers (= 10-nt ladder) for small RNA Northern and yeast tRNA were from Ambion (Carlsbad, CA); and protein and DNA size markers were from GenDepot (Barker, TX). The source of antibodies was described in [1,2].
In vitro methylated DNAs, and RNAs, transfection, and assays PCR amplification of "649-mer DNA" (illustrated in Fig 2A and used in Fig 2F) and its in vitro methylation and transfection was elaborated in Supplemental Information. Anti-oligos ("anti886 75-56" and "anti-vt 21-2"), siRNA against PKR, and in vitro transcribed RNA (nc886 and vtRNA1-1) were prepared as previously described [1,2,8]. Lipofectamine™ RNAiMAX reagent (Invitrogen) was used for transfection of anti-oligos; Lipofectamine™ 2000 reagent (Invitrogen) was used for 649-mer DNA and in vitro transcribed RNA. Detailed transfection protocols are described in Supplemental Information. Luciferase reporter assays and cell proliferation assays were performed as described previously [2]. The AP-1 reporter plasmid (a kind gift from Dr. Xiaoyong Bao at the University of Texas Medical Branch, Galveston, TX) encodes a firefly luciferase gene whose expression is driven by three copies of AP-1 recognition elements originally isolated from the IL-8 promoter.
mRNA microarray and pathway analysis
Transfection of "anti886 75-56" and "anti-vt 21-2" (for nc886 knockdown and control respectively) was performed in triplicate. Briefly, probe preparation and array run were done by using a TotalPrep™ RNA amplification kit and a HumanHT-12 v4.0 Expression BeadChip kit (Illumina, San Diego, CA) per the manufacturer's instructions. A more detailed protocol is described in Supplemental Information. The array data were deposited in the Gene Expression Omnibus (accession number GSE51732; Reviewer's link, "http:// www.ncbi.nlm.nih.gov/geo/query/acc.cgi?token=ulerguc evhsxbqp&acc=GSE51732"). For gene set and pathway analysis from the gene expression data, we used PAGE (Parametric Analysis of Gene set Enrichment) method with MSigDB (ver 3.0) gene sets [25,26]. | 4,641 | 2014-04-27T00:00:00.000 | [
"Biology",
"Medicine"
] |
Energy Transport in Trapped Ion Chains
We experimentally study energy transport in chains of trapped ions. We use a pulsed excitation scheme to rapidly add energy to the local motional mode of one of the ions in the chain. Subsequent energy readout allows us to determine how the excitation has propagated throughout the chain. We observe energy revivals that persist for many cycles. We study the behavior with an increasing number of ions of up to 37 in the chain, including a zig-zag configuration. The experimental results agree well with the theory of normal mode evolution. The described system provides an experimental toolbox for the study of thermodynamics of closed systems and energy transport in both classical and quantum regimes.
Energy transport and thermalization in nanoscale systems are of special interest as they pertain to the microscopic origins of statistical mechanics and the functionality of biological systems. The Fermi-Pasta-Ulam paradox, for instance, involves numerical simulations of a chain of oscillators where it was conjectured that nonlinearity in the potential would give rise to ergodicity and lead the system to eventually thermalize [1,2]. The simulation results prove otherwise and further work on the problem has led to discoveries of soliton solutions in related non-linear systems and the concept of dynamical chaos. In the context of energy transport in biological system, there has been a number of theoretical investigations how the high transport efficiency can arise in a molecule located in a noisy environment at the ambient temperature. The possible explanations include constructive interferences among the possible pathways aided by spatial robustness against decoherence [3] and inhibition of destructive interference via dephasing noise [4].
Chains of trapped ions allow for experimental investigation of energy transport phenomena and oscillator chain models [5]. In contrast to naturally occurring systems, the parameters such as the amount of present nonlinearity and decoherence can be tuned precisely with additional optical potentials. Recently, trapped ion chains have been proposed as a model system for study of multidimensional spectroscopic techniques commonly used to probe energy transport in photosynthetic molecules [6]. In addition to serving as a model system, ion chains may reveal a number of interesting deviations from expected thermodynamic behavior. When the extreme ions on either side of the chain are coupled to heat baths of different temperatures, one expects a non-linear temperature distribution across the ion chain [7]. The non-uniformity of ions trapped in the harmonic potential leads to nonextensive scaling of thermodynamic quantities and to eigenmodes that differ from phonon-like waves [8,9].
In this Letter, we present the observed energy transport dynamics in long trapped ion chains. We prepare an out-of-equilibrium state of the chain by rapidly imparting momentum onto a single ion at one end of the chain. We then monitor the energy of the ions in the chain as the initial excitation propagates, leading to multiple revivals of energy. The energy revivals persist for a surprisingly long time indicating that the system does not thermalize on the experimental timescale. Our work extends the results obtained for two ions in the quantum regime [10] to much longer chains of up to 37 ions. The resultant dynamics are more complex as they involve participation of a greater number of normal modes of the chain.
In order to observe the energy propagation, both the excitation and the energy readout have to be faster than the coupling between the ions. Intuitively, the dynamics can be understood in terms of the eigenmodes of the ion chain. When a single ion on the end of the chain is excited, the system is not in an eigenstate of the full coupled set of oscillators. Rather, the excitation creates a superposition of the eigenmodes which then evolve at their eigenfrequncies. The time evolution results in the local excitation transferred to other ions in the chain. Rephasing of the participating eigenmodes corresponds to energy revivals. The same reasoning applies in the quantum regime for propagation of a single local phonon observed in [10].
The experiment proceeds as follows: a chain of N Ca + ions is confined in a harmonic potential of a linear Paul trap with trap frequencies (ω x , ω y , ω z ) = 2π × (2.25, 2.0, 0.153) MHz. A laser at 397 nm is red-detuned with respect to the S 1/2 -P 1/2 transition to perform Doppler cooling of the whole chain. An intense beam at 397 nm is tightly focused onto a single ion on one end of the chain, as shown in Fig. 1.
We rapidly add energy using a technique of pulsed excitation: the intensity of a focused beam is switched off and on with the frequency of the local mode [11,12]. This resonant process results in a quadratic increase in energy of the excited ion with the number of applied kicks [5]. After 10 µs of pulsed excitation, we stop and let the system evolve freely for a time τ . Then the energy of individual ions in the chain is measured with a laser at 729 nm tuned to the red motional sideband of the narrow quadrupole transition between |g = |S 1/2 ; m J = −1/2 and |e = |D 5/2 ; m J = −5/2 [13]. We analyze the dynamics in terms of the eigenmodes of the ion chain by considering that the ions are confined in a potential V , which is the sum of the harmonic trap potential and the Coulomb interaction. Minimizing potential energy V with respect to the coordinates of every ion i, (x i , y i , z i ), yields the equilibrium positions x 0 i , y 0 i , z 0 i . We treat each ion as an individual oscillator coupled to the other ions in the chain. Considering the motion in only one of the radial directions, x, the potential energy V can be expanded for small displacements about the equilibrium, where e is the electron charge and m is the ion mass [14,15]. The local oscillation frequency is modified by the repulsive force from the other ions in the chain and their effect drops off as the cube of the inter-ion distance. The ions are closer to each other in the center of the chain, hence the local oscillation frequency will be minimal for the middle ion. The local displacements q i are coupled, and the system may be diagonalized in terms of the N radial normal modes of the ion chain [14]. We enumerate the normal modes v n in the order of decreasing eigenfrequencies such that v 1 always refers to the center-of-mass mode with the corresponding eigenfrequency of ω x . We Locally excited normal modes Fig. 2 for chains of increasing number of ions. We see that the pulsed excitation creates a superposition of eigenmodes of chain motion, which then evolve at their corresponding eigenfrequencies. However, even for long chains, only the first 10 modes play a large role in determining the subsequent time evolution.
In order to precisely measure the energy of the system after an evolution time τ , we consider the quantized version of the system with the potential energy described by Eq. (1) with the Hamiltonian H x written in terms of local phonons creation and annihilation operators at site i denoted a † i , a i , respectively [16,17]: where we have neglected fast-rotating non-energy conserving terms. The site-dependent oscillation frequency and the tunneling amplitude are given by: where ∆z ∼ O(1) is a dimensionless separation of ions in the units of the characteristic length scale [14]. We find that for the experimental trap frequencies, the tunneling matrix element between the leftmost ion and its neighbor, t 12 , ranges from 2π × 6.7 kHz for a chain of 5 ions to 2π × 21.1 kHz for a chain of 25 ions. After performing Doppler cooling, the local radial mode of every ion is in a thermal state with a temperaturē n. The density matrix of the ion i, ρ th i , can be expressed in terms of the local phonon number basis |n i : The pulsed excitation process is modeled as a displacement operator D(α) applied onto the leftmost ion (i = 1) in the chain [18] for some complex amplitude α = |α|e iφ resulting in a displaced thermal state of motion. Experimentally we do not control the phase of the pulsed laser φ, yielding a diagonal density matrix of the first ion ρ 1 after averaging over φ: with the occupational probabilities given by [19]: n+1 L n − |α| 2 n(n + 1) (7) where L n is the Laguerre polynomial of the n th degree. We measure the displacement |α| of the ion i by driving the red motional sideband of the transition between |g , and |e . This interaction couples the electronic and motional states of the ion in the form |g |n i and |e |n i − 1 with Rabi frequency Ω n,n−1 , which depends on the particular motional state n [13]: where Ω 0 is a scale of the coupling strength and η is the Lamb Dicke parameter. In the regime of our experiment, the Rabi frequency Ω n,n−1 increases with the n, and the energy can be determined by monitoring the strength of the sideband interaction. Specifically, we measure the probability to find the ion in the electronic ground state |g : First, we extract the initial temperature,n, without pulsed excitation. Then, with pulsed excitation added, the knowledge ofn, the laser intensity, and the trapping parameters allows us to calculate the displacement |α| from the electric ground state probability P g (t). The experimental excitation time is fixed at t = 7.5 µs, short compared to the coupling in order to address only the local mode of motion. The short durations of both the energy readout and the pulsed excitation (10 µs) are crucial to observe the energy dynamics as these operations are much faster than the characteristic coupling time of the leftmost ion 2π/t 12 .
The results for the 5-ion chain and the comparison to theory are presented in Fig. 3. The experimental data are in good agreement with a molecular dynamics simulation where the leftmost ion is initially displaced in the radial direction. The simulation takes into account the full potential V , including non-linearities from the Coulomb repulsion and the driven motion of ions in the Paul trap. It has no free parameters and uses independently measured trap frequencies as inputs. The simulation results show that the dynamics of the time evolution do not change with a decreased initial excitation amplitude, confirming the absence of non-linear effects and justifying the normal mode picture.
We repeat the experiment with progressively longer chains such that a greater number of normal modes is populated by kicking the leftmost ion. The energy of the rightmost ion during the sequence of experiments is presented in Fig. 4. Similar revivals can be identified for increasing chain length. This is explained by the fact that the eigenfrequencies of the populated 10 normal modes have only a weak dependence on the ion number. For example, the splitting between the eigenfrequencies of the modes v 1 and v 5 increases by ∼ 3% as the length is increased from N = 5 to N = 25. It can be seen that the revival features become sharper for longer chains: more normal modes participate in the dynamics and the ions are spaced closer together increasing the coupling rate. The energy of the rightmost ions measured for progressively longer chains. The rate of energy transfer across the chain increases slightly to the reduction in the inter-ion distance but has a weak dependance on the ion number. Due to the faster coupling, the feature size of the revivals decreases for longer chains. The average measured energy drops: with a greater number of participating normal modes, the excitation is distributed among more ions.
The faster coupling also leads to a higher energy of the rightmost ion for evolutions times τ ∼ 0 µs: some energy has already transferred from the excited ion to the rightmost ion by the time the pulsed excitation is complete.
Even for very long chains, the energy revivals persist for a long time compared to the coupling strength. This is illustrated by Fig. 5. This measurement was performed with 37 ions in a partially zig-zag configuration as shown in Fig. 1b. The energy revivals continue even after an evolution time of τ = 40 ms. For times longer than 40 ms the dynamics wash out, likely due to the instability of the trapping frequencies. Consecutive measurements apart by 12 minutes for the evolution time τ = 40 ms revealed a 20 µs shift in the position of the revival peak, corresponding to 5 × 10 −4 change in the coupling strength. The trap frequencies, particularly the radial frequency ω x , are not expected to be stable at this level.
The measurement presented in Fig. 5 shows a large difference in the excitation between adjacent ions: the rightmost ion in the chain and its neighbor. While the energy efficiently transfers from the leftmost kicked ion to the rightmost ion, the ion second from the right does not get as energetic. This phenomenon follows from the normal mode decomposition of the initial excitation. In the local mode picture, it can also be seen that the efficient transfer of energy occurs because the kicked and the rightmost ion have the same local trap frequency leading to an onresonant coupling. However, the local frequency of the ion second from the right is different (by approximately the next neighbor coupling), leading to an off-resonant excitation.
In summary, we have presented experimental results measuring the transport of energy in chains of trapped ions. By exciting the ions and reading out the energy faster than the coupling, we are able to observe the energy propagation from the kicked ion. The dynamics observed for 5 ions agree well with numerical simulations and are explained by the normal mode decomposition of the initial excitation. This work enables the study of the Fermi-Pasta-Ulam problem in both classical and quantum regimes. We plan to investigate how the ion chain thermalizes by engineering additional non-linearities to alter the normal mode structure. Particularly in the quantum regime, it will be possible to use the demonstrated techniques to perform energy transport experiments where the size of the system would render numerical simulations unfeasible.
Delay time ⌧ (µs) Energy |↵| 2 FIG. 5. The energy revivals measured for the rightmost ion (circles) and ion second from the right (triangles) in a 37-long ion chain partially in the zig-zag configuration. The dynamics extend up 40 ms and are likely limited by trap frequency instabilities. The energy from the kicked ion does not efficiently transfer to the ion second from the right. The general rise in the measured energy corresponds to background heating during the free evolution. | 3,491.6 | 2013-12-20T00:00:00.000 | [
"Physics"
] |
Multidimensional computational study to understand non-coding RNA interactions in breast cancer metastasis
Metastasis is a major breast cancer hallmark due to which tumor cells tend to relocate to regional or distant organs from their organ of origin. This study is aimed to decipher the interaction among 113 differentially expressed genes, interacting non-coding RNAs and drugs (614 miRNAs, 220 lncRNAs and 3241 interacting drugs) associated with metastasis in breast cancer. For an extensive understanding of genetic interactions in the diseased state, a backbone gene co-expression network was constructed. Further, the mRNA–miRNA–lncRNA–drug interaction network was constructed to identify the top hub RNAs, significant cliques and topological parameters associated with differentially expressed genes. Then, the mRNAs from the top two subnetworks constructed are considered for transcription factor (TF) analysis. 39 interacting miRNAs and 1641 corresponding TFs for the eight mRNAs from the subnetworks are also utilized to construct an mRNA–miRNA–TF interaction network. TF analysis revealed two TFs (EST1 and SP1) from the cliques to be significant. TCGA expression analysis of miRNAs and lncRNAs as well as subclass-based and promoter methylation-based expression, oncoprint and survival analysis of the mRNAs are also done. Finally, functional enrichment of mRNAs is also performed. Significant cliques identified in the study can be utilized for identification of newer therapeutic interventions for breast cancer. This work will also help to gain a deeper insight into the complicated molecular intricacies to reveal the potential biomarkers involved with breast cancer progression in future.
Generation of mRNA-miRNA-lncRNA-drug interaction network followed by hub RNA, module identification and TF analysis
In order to understand the interaction in the complex linkage of miRNA-mRNAs-lncRNA-drugs, the cytoscape v3.9.0 27 software was utilized effectively.It is a freely accessible platform to perform complex cluster-based networks for multiple bioentities approach.The software enables visualization of complex networks comprising of multiple bioentities.From the 'Tool" dropdown menu, 'Merge" attribute has been used utilized to merge the small networks formed namely, mRNA-mRNA, mRNA-lncRNA, mRNA-drugs, miRNA-drugs, lncRNA-drugs and miRNA-lncRNA.Consequently, the Maximal Clique Centrality (MCC) ranking method of the 'cytohubba' plugin has been utilized for extracting the hub mRNAs from interaction network.Additionally, the 'MClique' plugin was employed to obtain cliques.The MCODE plugin is employed on the interaction network to obtain the top two subnetworks (based on MCODE Score of 5.438 and 5.417) to carry out transcription factor (TF) analysis, respectively.The top BC carcinogenic conditions in which the highly expressed hub genes (cut-off degree of 30 for significant genes) are obtained from the topological parameters.For the mRNAs from the top two subnetworks from MCODE results, interacting miRNAs are searched from the ENCORI database.Once the mRNAs and miRNAs are retrieved, the interacting TFs for mRNAs and miRNAs are retrieved from the TRRUST (https:// www.grnpe dia.org/ trrus t/) 28 and TransmiR (https:// www.cuilab.cn/ trans mir) 29 databases, respectively.To retrieve the hub TFs, the subnetworks are first merged and then MClique plugin is employed on the interaction network involving mRNAs, miRNAs and TFs.ChA3 (https:// maaya nlab.cloud/ chea3/) 30 TF analysis was done for all the TFs obtained from TRRUST and TransmiR to validate the hub TFs revealed in the Clique identification in MClique of cytoscape.To validate the significance of the TFs, CheA3 TF analysis of the TFs was done.
Status of various correlated gene regulation in diseased state
In TCSBN, the gene co-expression network for the 20 hub RNAs is built under the category of cancer tissue utilising the Breast (BRCA) dataset.Based on the hub RNAs retrieved from the interaction network, a backbone gene co-expression network is built using the Tissue/Cancer-Specific Biological Networks (https:// inetm odels.com/) 31 database.Among the top 20 hub RNAs, 11 were mRNAs.The adjusted p value is set to 0.05, the node limit is 25, correlation is set to both (positive and negative).
Expression, survival and oncoprint analyses of the RNAs
Following the retrieval of hub RNAs, the expression analysis for mRNAs and miRNAs is done using UALCAN (The University of ALabama at Birmingham CANcer).The UALCAN webtool (http:// ualcan.path.uab.edu/ cgibin/ ualcan-res.pl) 32 generally comprises of omics data related to cancer and provides efficient expression profiles for genes corresponding to protein-coding and non-coding RNAs (ncRNAs).The gold-standard metastatic dataset of TCGA corresponding to 'BRCA: breast invasive carcinoma' dataset is exploited effectively to retrieve the expression pattern of the hub RNAs.For the selections pertaining to sample types, BC subclasses and their subclass-associated DNA-methylation status, genes with p value < 0.05 were checked for their significant expression in BC.Subclass-based promoter methylation status is generally an epigenetic event in the initial phases of tumorigenesis and therefore has prognostic cancer biomarker potential 33 .In order for normal regulation of the genes, DNA methylation as well as structure of the chromatin play significant roles.Hence, DNA methylation status was also checked using UALCAN tool.The beta-value along the ordinate-axis of the methylation plots range from 0 (unmethylated) to 1 (full methylated mRNAs).Hypermethylation ranges from beta value of 0.7-0.5 while the 0.3-0.25 range of beta value indicates hypomethylation.
Further, survival analysis of mRNAs was performed using Oncolnc (http:// www.oncol nc.org/) 34 that provides an interactive platform correlating the survival data of cancer patients obtained from TCGA with mRNA, miRNA and lncRNA expressions.The patients' tumor samples were divided into high-and low-expression groups (n = 503) which were analyzed by log-rank test and statistical significance of the selected markers was confirmed with p value < 0.05.To validate the most significant survival biomarkers PrognoScan (http:// dna00.bio.kyute ch.ac.jp/ Progn oScan/) 35 and Fp tool (http:// dcv.uhnres.utoro nto.ca/ FPCLA SS) 36 scores were compared to the UALCAN p values for individual DEGs.The robust platform provided by PrognoScan makes it possible to assess prospective tumor markers and treatment targets, which would speed up the study of cancer.Finding the genes that are co-expressed with hub genes can be done using the in-silico method known as the Fp tool, which predicts high-confidence protein-protein interactions.Based on the total scores, the co-expressed genes were determined.
In order to achieve vital information on the genetic alteration of the DEGs in individual hallmarks, oncoprint analysis was performed with the help of cBioPortal server (https:// www.cbiop ortal.org/) 37 .The Oncoprint technique is particularly helpful for finding trends like co-occurrence and mutual exclusivity as well as for visualizing changes across a group of instances.In the end, two genes that do not co-occur in the same patient may be in the same pathway.Finding these genes makes it possible to develop artificially deadly treatments.It shows discrete gene values for all data types, including data with continuous values like mRNA expression (i.e., whether a gene is altered or not based on a predetermined threshold.This integrative platform provides high quality genetic profiles relative to the various alterations at molecular level.Samples across the TCGA dataset (Cell, 2015) was explored to see genetically altered mRNAs.This database reveals different chromosomal mutations with their chromosomal position and changes in base pair.In addition to the cBioPortal default layout, there are supplementary bar plots on either side of the heatmap that display the number of various modifications for each sample and each gene.
In this study we have incorporated this data in the form of a Circos plot using the web-based Circos tool (http:// mkweb.bcgsc.ca/ table viewe r/) 38 .
Enrichment analysis of the DEGs
EnrichR was used to conduct the functional enrichment analysis of the DEGs.This web-based tool (https:// maaya nlab.cloud/ Enric hr/) 39 Figure 1 schematically represents the framework of the overall study.
Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors.www.nature.com/scientificreports/
Obtaining target RNAs and interacting drugs
Initially, 450 BC metastatic mRNAs are retrieved from HCMDB database followed by retrieval of 3159 BC associated differentially expressed mRNAs from GEPIA2.Comparing both the gene lists, 113 differentially expressed mRNAs involved in BC carcinogenesis.These 113 DEGs are found to be interacting with 614 common miRNAs retrieved from TargetScan, ENCORI and miRTarBase databases and 220 common lncRNAs of TarBase, lncBase (mRNA-lncRNA interaction data) and ENCORI (miRNA-lncRNA interaction data) are considered for the study.1049 drugs interacting with mRNA, miRNA and lncRNA were discovered from CTDbase, Sm2miR and D-lnc database, respectively.
BC-specific backbone PPI network
The backbone gene co-expression network for the top 20 hub DEGs provides the positively and negatively correlated genes involved in the network (Fig S3) using TCSBN.The correlated gene clusters are listed along with their p values in Tables S1 and S2.The positively correlated edges are represented as orange lines while the negative ones as blue lines.The nodes are in blue for both the correlation networks representing genes.The net correlation network is represented in Fig. 2.
Interaction network analysis using cytoscape v3.9 In cytoscape, the interaction data tables are loaded individually summing up to six different interaction tables: mRNA-miRNA, miRNA-lncRNA, mRNA-lncRNA, mRNA-small molecules, miRNA-small molecules and lncRNA-small molecules interaction network.The complex interaction network is generated by merging these individual interaction networks (Fig
Retrieval of sub networks from the interaction complex network
MCODE is implemented on the interaction network based on the haircut algorithm taking into consideration the parameters like a k core of 2, a node cut-off value of 0.2, and a maximum depth of 100.To create sub-networks, the top two clusters according to the clustering score were utilised (Fig. 4a) taking into consideration the top 20 hub RNAs (Fig S2) using cytoscape.The first module had cluster score of 5.438 encompassing 90 nodes and 242 edges followed by the module with a cluster score of 5.417 that was made up of 25 nodes with 65 edges.Gene ontology (GO) analysis of the top 2 modules revealed their involvement in salivary gland morphogenesis, lymph node development, pathway restricted SMAD protein phosphorylation, JAK-STAT cascade involved in growth hormone signalling pathway (biological processes), ion transmembrane transported activity and Ran GTPase binding (molecular function) and connexon complex, contractile fiber, pseudopodium, dendrite cytoplasm, axon and gap junction (cellular components) were the topmost ontologies enriched.
Construction of TF-miRNA-mRNA interaction network and TF analysis
The mRNAs involved in these two sub-networks (Fig. 4a) are further exploited to study a mRNA-miRNA-transcription factor (TF) interaction network.For the 8 common mRNAs involved, 39 interacting miRNAs and 56 interacting TFs were retrieved.The merged network for mRNA-miRNA-TF is generated (Fig. 4b).The MClique plugin of cytoscape is utilised to reveal that two of the 56 TFs (ETS1 and SP1) are present in two of the three generated cliques (Fig. 4c).Two out of the three cliques have TFs ETS1 and SP1.Further, the CheA3 TF analysis reveals that these two TFs from the cliques are present among the top 20 integrated mean rank TF analysis (Table S3).The third clique is comprised of two mRNAs and one miRNA.
Expression analysis of mRNAs and lncRNAs
Subclass-based expression analysis.The expression analysis of mRNAs and miRNAs was performed using the UALCAN tool (Table 1).The BC subtype-based expression analysis reveals GJA1 is the most expressed mRNAs and TGFB1 and EGFR has similar expression levels ( www.nature.com/scientificreports/Promoter-methylation based expression.The promoter methylation status analyses of the mRNAs revealed that (Table 1) MMP2, CXCL12, MAPK1, ICAM1 and BIRC5 are significantly methylated in Luminal A subtype of BC.GJA1, ENAH, EGFR, TGFB1and YAP1 are highly methylated in HER2 + BC subtype.STAT3 is the only gene found to be most methylated in triple-negative breast cancer (TNBC).
Expression analysis of miRNAs
Expression levels of miRNAs (Fig. 5a,b) reveals that miR-105-2 and miR137 are the most significantly expressed miRNAs, while miR-204 and miR133b expression levels are reduced in BC (Fig S6).
Survival Analysis of DEGs
Survival Analysis using Oncolnc in Table 2 lists significant survival biomarkers with log rank p values of less than 0.05.Further, p values from the PrognoScan tool analysis and Fp class scores validates the survival status of the top hub mRNAs.
Validation of survival biomarkers was done by comparing the p values of the hub mRNAs from PrognoScan and Oncolnc (Table 3).This was done to finalize the most probable survival biomarkers that can be targeted as a therapeutic alternative to manage BC.To each hub gene, a gene expression score and a network's topological score were generated.The co-expressed partner gene with the common survival marker TGFB1 (hub gene) was identified as FURIN.
Mutation-related data for DEGs
From cBioPortal, the types of mutation for each hub mRNAs are obtained.Circos plot (Fig. 7) represents missense, frameshift deletions and splicing in different chromosomal positions of the hub genes.
The circular two-dimensional graphical representations known as "circos plots" offer a thorough method for presenting and understanding multi-dimensional genetic data.The different versions of the same genes are mentioned as gene_1, gene_2 and so on depending on the number of different chromosomal positions at which the mutations occur.ENAH is the only mRNA with missense (MS), Splice sites (SP) and frameshift deletion (FD) while BIRC5 has FD AND missense mutations.No data was found for CXCL12.The Table S4 lists all the mutations with respective chromosomal positions for the 11 hub DEGs.
Discussion
Conventionally, an individual signaling pathway or a dysregulated proteins is targeted for devising therapies.In this work, DEGs involved in BC are considered that mediate the functioning of various ncRNAs involved in the disease development.Hence, we have utilised network-based approaches to identify specific cliques involving the interaction of DE mRNAs, miRNAs, lncRNAs and drugs/small molecules that can be considered for devising newer therapeutic strategies as they give us an overall idea of the disease environment under the influence of interacting RNAs and drugs.The ncRNAs are often not extensively discussed or considered with respect www.nature.com/scientificreports/ to disease causing phenomena and in some of the studies they are considered as separate identities, when the underlying mechanism of disease involves the interaction of mRNAs and ncRNAs with each other.For instance, when no drug is provided into a system, dysregulated levels of ncRNAs can modulate the levels of coding RNAs or expression of protein levels leading to a diseased state.Through this network-based approach we are trying to understand the crosstalk between mRNAs and ncRNAs that can therapeutically be addressed in future considering the entire disease environment.Differential expression of genes helps in the developing in-depth biological insight into the underlying mechanism of a disease condition owing to its interaction with various ncRNAs and drugs that also play a major role in modulating the overall scenario underpinning any diseased state.
According to the in-silico study conducted, 11 out of 113 differentially expressed metastatic genes undergo different types of mutations, primarily missense, frameshift and splicing that lead to the diseased state.Regulation of expression of different DEGs in turn influence the expression levels of miRNAs, lncRNAs and different molecules (small molecules/drugs).Among the mRNAs from the top 10 cliques generated, BIRC5 was one of the most significant with more than 10% of probable genetic alteration.BIRC5 expression is associated with resistance to chemotherapy, radiation and neo-adjuvant therapy, specifically in II/III stage of BC 40 .Also, with age of the patients the expression of BIRC5 increases and hence targeting it aids in better survival in BC patients.Significant amount of DNA methylation is observed in BC tissues.Also, from the DNA promoter methylation status of BIRC5 reveals its negative association with the methylation status of the gene.Survivin coded by BIRC5 is responsible for cell division bypassing the cell death in normal as well as cancer cells leading to decreases survival of cancer patients.
However, the validated survival markers include TGFB1, GJA1 and ICAM1.The role of TGFB1 is multifaceted in BC depending on the stage of cancer 41 .This cytokine exhibits tumor-suppressing properties in the initial stages of BC by hindering epithelial cell cycle progression and enhancing apoptosis.TGFB1 is associated with tumor development, higher cell motility, cancer invasiveness, and metastasis in late stages, however.Additionally, it promotes the EMT and modifies the cancer microenvironment.However, there are three known therapeutic strategies involving TGFB1.On order to disrupt ligand-receptor interactions, soluble receptors and anti-receptor monoclonal antibodies are used along with TGFB1 receptor kinase inhibitors and peptide aptamers that suppress intracellular signalling cascades by using antisense compounds to impede TGFB1 production on the ligand level [42][43][44][45] .GJA1 is subtype-dependent mRNA that codes for the protein Connexin-43 (Cx43).Cx43 is overexpressed in ERα-or PR-positive breast tumors compared to ERα-or PR-negative tumors 46 .The expression of ICAM1 is directly proportional to the metastatic potential of the tumor cells.Reduced invasion of human epithelial BC cells in vitro is done by targeting endogenous human ICAM1 46 .
The significant DEGs are not only identified by exploiting multiple correction algorithms but are also backed up by existing literature review regarding their relevant role in BC.Hence, the results are consistent and comparable (Table 4) as per literature provided regarding the role of these DEGs in BC.
ZFAS1 is generally known to be a tumor suppressor lncRNA.Overexpressed levels of ZFAS1 are associated with decreased tumor cell proliferation leading to apoptosis of BC cells.In addition to this, significant expression levels of ZFAS1 leads to decreased metastasis by regulating EMT 56 .ZFAS1 binds with the CDK1/cyclin B1 complex leading to destabilized state of p53, which promotes cell cycle advancement and inhibits apoptosis 57 .The expression levels of ZFAS1 is in human BCs is much less compared to its levels in normal native tissues 58 .Upregulated levels of ZFAS1 in BC targets miR-589 by the PTEN/PI3K/AKT signal pathway modulation resulting www.nature.com/scientificreports/ in possible inhibition of proliferation, tissue invasion and metastasis of BC cells.The role of linc00205 that is the most shared lncRNA among the 636 cliques generated.The role of this lncRNA is not quite explored in BC.It is known to promote tumorigenesis and metastasis by competitively suppressing miRNA-26a in gastric cancer 59 .Additionally, it speeds up the growth of hepatoblastoma via controlling the microRNA-154-3p/Rho-associated kinase 1 axis through mitogen-activated protein kinase signalling.WDFY3-AS2 is found to be decreased in TNBC and hence serves as a potential prognostic factor in TNBC development 60 while its overexpression is associated with inhibition of tumor cell growth, cell migration and invasion 61 .Due to its inherent and extrinsic propensities to suppress carcinogenesis, the TF ETS1 may be an effective therapeutic target for BRCA 62 .SP1 interacts with the insulin-like growth factor I receptor to regulate BC proliferation.Additionally, SP1 promotes angiogenesis by binding to the VEGF promoter, creating a favorable environment for the development of tumors.
Among the top upregulated and downregulated miRNAs there are miR-105 and miR-137 and miR204 and miR934, respectively.miR-105 has an intricate function in the onset and propagation of cancer.Given the specific tumor setting and the pairing of bases in genes, miR-105 either functions as a tumor suppressor by preventing metastasis or as an oncogene by encouraging tumor initiation and tissue invasion 63 .Evidence suggests that miR-137 plays a function in tumor suppression via altering Del-1 expression in TNBC 64 .In breast tissues, miR-204-5p was dramatically downregulated, and patients with BC who expressed more of it had better survival rates 65 .miR-934 mediated regulation of PTEN and EMT results to BC metastasis 66 .
When it comes to the small molecules or drugs found in the top 10 cliques, it is seen that they are approved drugs that are clinically available to treat BC.The drugs from the top 10 significant cliques like doxorubicin, www.nature.com/scientificreports/dexamethasone are the conventional drugs used for BC treatment.As mentioned earlier, these drugs come with various side effects alongside drug resistance by BCCs.Owing to interaction with approved drugs, not only mRNAs, miRNAs and lncRNAs act as potential candidates but also the two TFs that are within the top 14 in ChEA3 analysis when targeted have the ability to treat BC.Targeting such ncRNAs and TFs can help in dosage modulations of such conventional drugs reducing adverse effects and hence add a therapeutic value into the BC regimen.
Future prospects of the interaction study
The hERG channel activity is vital for normal cardiac functioning.Any drug-mediated hindrance in the channel activity leads to serious cardiotoxicity resulting to prolonged QT interval.Hence it is of utmost need to evaluate the role of drug molecules in modulating the channel activity.In a study 67 dealing with cardiotoxicity imparted by the hERG channel blockers, a robust deep learning (DL) model called DMFGAM is utilised.It is a fivefold experimentally cross validated model based on the molecular fingerprints and graph attention mechanism.This model serves as a significant tool to assess hERG channel blockers in initial phases of drug discovery and development.Similar to network-based approach, newer technologies are needed to be developed to understand the relationships among various bioentities involved in a disease.In a study by Sun et al. 68 , a novel DL algorithm named as 'graph convolutional network with graph attention network' (GCNAT) to predict the potential associations of disease-related metabolites.The graph convolutional neural network is used to encode and learn characteristics of metabolites and diseases.The encapsulations of several convolutional layers are then combined using a graph attention layer, and the associated attention coefficients are determined to give the embeddings of each layer various weights.The final synthetic embeddings are decoded and scored in order to achieve the prediction result.Finally, GCNAT surpasses the outcomes of the current five state-of-the-art predicting algorithms in fivefold cross-validation, achieving a dependable area under the receiver operating characteristic curve of 0.95 and a precision-recall curve of 0.405.
GCNCRF 69 is a technique for predicting human lncRNA-miRNA interactions that is based on the graph convolutional neural (GCN) network and conditional random field (CRF).Using the LncRNASNP2 database's known lncRNA and miRNA interactions, the lncRNA/miRNA integration similarity network, and the lncRNA/ miRNA feature matrix, we first build an eclectic network.Second, a GCN network is used to obtain the first embedding of nodes.The generated initial embeddings can be updated by a CRF set in the GCN hidden layer to ensure that related nodes have similar embeddings.The decoding layer is then used to decode and score the final embedding.GCNCRF achieved a fivefold cross-validation experiment area under the receiver operating characteristic curve value of 0.947 on the primary dataset, outperforming the other six cutting-edge approaches in terms of prediction accuracy.
In a study by Xu et al., it was investigated how components are built in messenger RNAs mRNAs-driven protein droplets with respect to various physical features by developing a Cahn-Hilliard phase-field model coupled with Ginzburg-Landau free-energy scheme.It was observed that the growth rate of droplet size and the assembly of higher-order complexes in a droplet are severally determined by the diffusion rate of the droplet and the binding rate of mRNA with protein.This was done by analyzing the intra droplet hetero patterning of two specific droplets (mRNA-and mRNA-driven droplets).A phase-field model based on the Cahn-Hilliard diffuse interface model to investigate how mRNAs regulate protein phase separation.Whi3 protein is combined with a particular kind of mRNA, which can bind to Whi3 through RRM to create complexes 70 .
In a work by Xiang Li et al. 71 , they assessed exact quantities of up to hundreds of proteins involved in the dynamic assembly and disassembly of TNF signaling complexes using the SWATH-MS approach.When we combined experimental validation with SWATH-MS-based network modelling, we discovered that the cell only experiences TRADD-dependent apoptosis when RIP1 levels are below 1000 molecules/cell (mpc).There is a biphasic relationship between the amount of RIP1 and the occurrence of necroptosis or total cell death.In order to allow RIP1 to play a variety of roles in controlling cell fate decisions, our study offers a resource for encoding the complexity of TNF signalling as well as a quantitative description of how different dynamic interactions between proteins serve as basis sets in signalling complexes.
Recent studies show that inflammasome-activated caspase-3 can trigger secondary necrosis/pyroptosis, which releases fewer inflammatory cytokines and reduces the occurrence of severe immune diseases.GSDME can prevent tumor growth by enhancing cell antitumor function.However, GSDME-induced secondary pyroptosis appears to be minimal in GSDMD-or caspase-1-deficient RAW-asc cells.Further analysis using cells with high GSDME expression, such as bone marrow-derived macrophages (BMDM), is needed to fully understand the role of secondary pyroptosis in these cells.Pyroptosis decreases cell death contribution, while apoptosis becomes important with reduced caspase-1 or GSDMD levels, with low caspase-1 thresholds 72 .
A study 73 presents a novel matrix factorization model called LMFNRLMI, which predicts lncRNA-miRNA interactions using known positive samples.The model outperforms other models in leave-one-out and fivefold cross validation, improving performance and confirming its superiority.The model aims to be a useful tool for identifying potential lncRNA-miRNA association identification in the future.
Deep Parametric Inference (DPI) is a powerful single-cell multimodal analysis framework that transforms multimodal data into a multimodal parameter space.It can characterize cell heterogeneity more comprehensively than individual modalities and has superior performance compared to state-of-the-art methods.DPI successfully analyzes COVID-19 disease progression in peripheral blood mononuclear cells and proposes a cell state vector field for bone marrow cell states 74 .
A study by Li Zhang et al. 75 developed a network distance analysis model for predicting lncRNA-miRNA associations (NDALMA) using Gaussian interaction profile (GIP) kernel similarity.The model achieved satisfactory results in fivefold cross validation.NDALMA showed superior prediction performance compared to other network algorithms.Case studies confirmed its reliability in predicting lncRNA-miRNA associations.
Gene function and protein association (GFPA) 76 is a new analysis framework that mines reliable associations between gene function and cell surface protein from single-cell multimodal data.It reveals cellular heterogeneity at the protein level, demonstrating its reliability across multiple cell subtypes and PBMC samples.
Conclusion
Clique identification from a complex network of differentially expressed metastatic targets, specifically ncRNAs involved in BC can be explored as the potential biomarkers for the BC.As per ChEA3 analysis, EST1 and SP1 are among the top 14 TFs.As per the BRCA dataset.This study gives an integrated environment scenario of the interaction of the RNAs and their roles in BC metastasis.The BC subtype-based expression analysis reveals TGFB1 and GJA1 are two most expressed mRNAs and can be explored further with respect to its modulatory effects on metastasis and BC stemness.The three validated significant survival markers are TGFB1, GJA1 and ICAM1 having frameshift and missense mutations primarily and hence that can be targeted to aim better overall survival in patients.BIRC5 is one of the key regulators as per the network analysis with significant genetic alteration.This gene is hypermethylated in luminal A and HER2 + ve subtypes.miR-105 and miR-204 are oncomiRs while miR-137 and miR-934 are tumor suppressor miRs.Among the lncRNAs, ZFAS1 and WDFY3-AS1 are tumor suppressor lncRNAs while lnc00205 is an oncogenic miRNA.As tabulated in Table 4, similar researches have been done involving interaction network of different RNAs, but every study lacks one of the other aspects.In this study, the integration is done involving coding as well as ncRNAs along with small molecules/drugs.In addition to various analyses performed, a plot depicting the mutations in BC metastasis for the hub genes is also given in this work.The network-based approach to identify cliques from the complex network utilizes interaction among competitive endogenous RNAs (ceRNAs) to devise a newer therapeutic strategy to treat BC.
enables enrichment analysis of the gene list based on genome-wide experiments.This study utilizes this open-access tool to perform pathway analysis (BioPlanet 2019, KEGG 2021 Human, Elsevier Pathway Collection and Reactome 2022) and ontology analysis of the DEGs categorized into three semantics: Biological Processes (BP), Molecular function (MP) and Cellular Component (CC).
Figure 1 .
Figure 1.The integrated methodology of the study exploring the role of coding and non-coding RNAs, small molecules and transcription factors involved in breast cancer metastasis.
Figure 2 .
Figure 2. Overall correlation network for the 11 hub mRNAs.The orange lines represent positive correlation and blue lines represent negative correlation of genes.The central nodes are the 11 hub DEGs and the surrounding nodes are correlated genes forming clusters.
Fig S4) in BC subtypes while the hub lncRNAs (NEAT1 and MALAT1) (Fig S5) are almost similarly expressed in all BC subtypes.XIST less expressed in luminal than the other lncRNAs.
Figure 4 .
Figure 4. (a) Based on the MCODE plugin score of cytoscape, the top 2 sub-networks are retrieved: Cluster 1 with a score of 5.438 and Cluster 2 with a score of 5.417.(b) The merged network for mRNAs-miRNA-TF (c) Cliques generated involving mRNAs-miRNAs-TFs reveal two significant TFs: SP1 and ETS1.
Figure 8 .
Figure 8.(a) Pathways enriched by the 113 DEGs as per REACTOME, KEGG, Elsevier and WiKi datasets.(b) Ontology enrichment (cellular components, molecular function and biological processes) of the 113 DEGs.
Table 1 .
Subclass-based expression analysis of hub RNAs and DNA promoter methylation status of mRNAs.
Table 2 .
Kaplan-Meier plot analysis of DEGs significant in survival of BC patient.
Table 3 .
Validation of top 20 hub genes: by utilizing the p value from PrognoScan and Oncolnc and Fp Class scores.
Table 4 .
Comparison of our integrated analysis with similar studies reported in the literature. | 6,577.2 | 2023-09-22T00:00:00.000 | [
"Medicine",
"Biology",
"Computer Science"
] |
A gis-based approach for manure spreading monitoring within 2 the digital agriculture framework
Agronomy
Introduction
The agricultural sector has a strong impact on emissions of pollutants, including 35 ammonia from livestock farming and fertilisation of agricultural land.
At the European level farmers will be permitted to spread manure in specific 37 temporal windows [1] although monitoring every plot of land is a critical point [2].38 The approach of integrating Earth Observation acquisitions, ground data and 39 available datasets bring benefits to deal with issue [2].Within this perspective, the 40 objective of the activity is to develop a tool to support monitoring of manure spreading.
Biol.Life Sci.Forum 2022, 2, x. https://doi.org/10.3390/xxxxxwww.mdpi.com/journal/blsf The study area identified to develop and demonstrate tool capacity is the Po Plain (Fig. 1), one of the regions most involved in agricultural activity in Italy.The tool was developed using the open source software QGIS.
A number of surveys were conducted in the north-east and in the east of the study area (province of Modena and Bologna) (Fig. 1) to collect Ground Truth Points (GTPs).The approach is based on a combination of the following set of relevant spatially explicit variables, combined using a weighted formula: • Variable 1: Manure spreading frequency; • Variable 2: Manure spreading areas manually detected; • Variable 3: Distance from farms and/or bio-digesters.
The data processing approach is shown in Figure 2.
Field campaigns
Field campaigns were carried out to collect a dataset of GTPs, used to validate the spreading and no-spreading areas (Fig. 3).Measurements of relative humidity, electrical conductivity and temperature were taken in the fields surveyed.They were carried out both where spillage had occurred and where it had not.
Satellite data processing
First, the satellite acquisitions were pre-processed by applying raster masks that isolate arable fields by integrating the following ancillary datasets: Soil Map [5] to circumscribe the lowlands; Land Use [5] and Land Consumption [6] to exclude urban and non-agricultural; Crop Map [7] to exclude grassland and alfalfa.
Then, spectral analysis was carried out in order to investigate manure spreading response.Average reflectance values for each spectral band acquired by the satellite sensor were extracted and analysed, in particular for spread areas.Variations in the ShortWaveInfra-Red (SWIR) region of the electromagnetic spectrum (corresponding to bands B11 and B12 of S2 MSI data) have been considered to be more suited in manure spreading detection, according to scientific literature [2,8].
Finally, a spectral index calculated from the combination of SWIR, near-infrared (NIR) and red spectral bands of S2 MSI was employed to map manure spread regions in each satellite acquisition: The separation between manure and other land cover has been reached with a threshold of 3, which represents the optimal performance according to the overlapping rate test for the pure pixel detection.Furthermore, the pixel aggregation procedure has removed the noise.These steps constitute a semi-automatic processing that provides the raw dataset utilised for the development of two different independent products:
Variable 1: Manure spreading frequency
The analysis of the preliminary output of the satellite data processing has shown that some areas were more often involved by manure spreading events; thus, the spreading frequencies have been investigated.Firstly, the data redundancy has been mitigated, realising a 350 m buffer for each candidate "manure spread" area.Later, each buffer was assigned the value 1, while the background was set to value 0.Then, manure spread rates have been obtained by summing up the manure spread maps generated from each satellite acquisition.Ultimately, the obtained values were converted into 5 classes, each of which was assigned a susceptibility value (as reported in Table 2).
Manure spreading frequencies lower than 5 have been excluded, since they could represent a low impact fertilisation practice.
Table 2. Susceptibility values depending on the automatic frequency computation.
Variable 2: Manure spreading manual identification areas
Using photo-interpretation, an expert operator confirmed manure spreading areas identified by semi-automatic processing, in order to refine the product by removing false positives.The resulting product consists in a spatially explicit variable in which manure spreads regions are identified by a 350 m radius buffer.Buffer operation has enabled the aggregation of contiguous manure spreads areas.Buffer areas and background pixels have been assigned the values 1 and 0, respectively.
Variable 3: Distance from farms and/or bio-digesters
Spray fields are typically located in close proximity to animal houses and manure lagoons due to the high cost of hauling the liquid [9][10][11][12][13].Positions of livestock farms and biodigester (main manure storage points in Po Plain) were identified using information on the characteristics of farms and biodigesters in the Emilia-Romagna region found in regional datasets [14].Only cattle farms with more than 80 animals/farm and pig farms with more than 600 animals/farm were selected.
A spatial analysis was then performed to assign each pixel in the study area a distance value from farms and biodigesters.
The last step was to reclassify the distance values into discrete classes by assigning each ring a susceptibility value between 0.2 and 1, as reported in Table 3.The assigned values were agreed upon with the stakeholders.
Integration of variables and tool calibration
In order to prioritise and integrate the three variables, a set of weights, agreed upon with the primary users, were assigned to combine variables according to formula: Susceptibility = (0.4 * Variable 1) + (0.2 * Variable 2) + (0.4 * Variable 3). ( Within the variables integration process, statistical indicators have been evaluated.
Among the coefficient combinations following a normal distribution, the one that better shows the balance between the three variables has been selected.Thus, very high susceptibility rates are detected uniquely in the areas where the three variables reach a maximum value.
Results
The product of the formula ( 2) is the susceptibility tool (Fig. 4), which ranges 0 to 1.
Discussion and Conclusion
Regarding the parameters measured in field campaigns, no evidence was found between the spreading and the non-spreading areas.However surveys have been essential in evaluating the correspondence between the satellite and the ground data.
New field campaigns could be carried out together with soil chemical analyses to better investigate any differences and to enhance the accuracy of the monitoring of the area.In this study the accuracy of the manure spectral index is calculated on test areas and will be implemented simultaneously with the aforementioned field campaigns.
The tool could be tailor-made to users' needs for different geographic areas modifying the weights used for each variable in the formula (2).In addition, the manure spreading frequency could be provided annually, estimated from satellite time series analysis updated.
Figure 1 .
Figure 1.Study area in Po Plain (in red).Surveys carried out in the green-marked areas.
Figure 2 .
Figure 2. Workflow of the methodology for the GIS-based tool development (Susceptibility map).
Figure 3 .
Figure 3. Field campaign measures and S2 MSI true colour acquisition over the surveyed site.Field campaigns were performed in the temporal window from October to March: October manure spreading for winter crops, November -February fertilisation ban, March soil/land prepared for new crops.
Figure 4 .
Figure 4. Susceptibility tool, zoom on the north-east portion of the Modena province.The values were converted into 5 classes (from very low to very high) to enhance the readability (Tab.4).
Table 1 .
. The processing operations, for both S2 and PRISMA acquisitions, have been executed using the software ENVI version 5.6.Technical specification of satellite data[3][4].
Table 3 .
Spreading susceptibility values assigned for distance from livestock farms and biodigesters.
Table 4 .
Classes of susceptibility according to values obtained. | 1,730.4 | 2023-11-02T00:00:00.000 | [
"Agricultural and Food Sciences",
"Computer Science",
"Environmental Science",
"Geography"
] |
ANALYSIS OF BASKETBALL GAME STATES AND TRANSITION PROBABILITIES USING THE MARKOV CHAINS
The abstract system of a basketball game has been established in the paper. Parts of the game are marked with the common characteristics; they are repeated, therefore, they can be denoted with the category: game states. The presented model enables the recognition and analysis of interaction between the set of system states. The discretization of the continuous course or fl ow of a basketball game and the de fi nition of equivalence among game states have given the prerequisites for the determination of transition probabilities between system states. Discrete stochastic processes and Markov chains were used for events modeling and transition probabilities calculation between the states. The matrix of transition probabilities has been structured between particular states of the Markov chain. The proposed model differentiates game states within four phases of game fl ow and enables the prediction of the future states.
ciations of offensive and defensive tactical intentions (Hernandez, 1987;Trninić, 1995).It also gives opportunity for mathematical formalization and for the analysis of complex interactions within the system of the game of basketball (Trninić, Perica, & Pavičić, 1994).Lapham and Bartlett (1995), McGarry et al. (2002) and Lees (2002) consider research into the complex interactions that occur in sport competition very important and propose further explorations of them.
The processes of cooperation and opposition emphasize the cognitive component of the game.They are treated as the process of interaction in which team-mates cooperate while performing innumerable Igor Jelaska 1 Slavko Trninić 1 796.323.2.052.244Ante Perica 2 Original scientifi c paper 1 University of Split, Faculty of Kinesiology 2 KK "ECE Bulls" Kapfenberg, Austria
INTRODUCTION
Worldwide popular team sport game with the ball known as basketball has its specifi c structural and functional characteristics which separate it from the other team sports, although they all belong to the same tree of ball sports games, like soccer, team handball, hockey, rugby, water-polo, the core of their nature being simultaneous existence of relationships of cooperation and opposition within the system of the game (Trninić, 1995;Trninić et al., 2010aTrninić et al., , 2010b)).Functional approach to the analysis of basketball game enables the decomposition of the system of interaction and interdependence of parameters within the structure of the game as well as the functional analysis of relationships and asso-tasks/jobs in the game.Simultaneously, players of the two intra cooperating systems on the court belong to the two opposing teams which employ the results of their cooperation to resist and outplay each other and win over each other (Trninić, 1995).
Therefore, basketball game must be observed as a complex phenomenon.In top-level sport that complexity, depending on the observer's standpoint, may be highlighted from the perspective of: players, expert coaches, managers, or scientists (Trninić, Jelaska, & Papić, 2009a).In this paper basketball is observed as a set of knowledge composed of several layers; it is called the body of basketball knowledge (Trninić, 1995;Trninić, Trninić, & Jelaska, 2010).In a competition, during a match, practical and conceptual knowledge of individual players and of the whole team is constantly scrutinized.
Basketball game can be observed, from the expert coaches and players' point of view, as the arranged, ordered sequence of game tasks each and every player must carry out relative to his/her playing position and role within a particular model of play tactics (Trninić, 1995;Trninić, et al., 2010aTrninić, et al., , 2010b)).The realization of the game tasks implies successful application of individual technical-tactical knowledge and skills on the court in the actual game situations.Successful application of individual technical skills and tactics is not possible if not coordinated with team-mates' individual technical skills and tactics through the collective team tactics the aim of which is accomplishment of individual and common goals, that is, to win a match by counter playing the same intentions of the opposing team and players.Trninić (1995) explains that game tasks individually classify motor activities and motor behavior of particular players in respect to playing position and the role within a team and a model of play tactics.That primarily regards basketball specifi c anthropological demands including: cognitive-motor (technical-tactical), energy-related (intensity of play) and socio-motor components of activities performed on the court i.e. coordination and opposition (Trninić, 1995).All the three components are manifested in solving and realizing particular game situations and in the fl ow of actions within game phases and models of play tactics, then in the intrinsic and extrinsic loads in training and competition activities as well as in the constellation of relevant sport-specifi c characteristics and state of players, being responsible for successful realization of particular playing positionspecifi c tasks in a game.
Trninić, Perica and Pavičić (1994) as well as Perica, (2011) and Jelaska (2011) described mathematically the system "basketball game".The achievement of that model is the recognition of the two basic system states -position and transition, from the aspect of the kinematic description.These states are (1) position, or in the vocabulary of basketball practice the positional/set offense and the positional/set defense, and (2) conversion -transition (Knight, 1994), the state of transmuting defense into offense, and vice versa.Conversion -transition are interpreted as a switch, a connection between the phases of defense and offense.The tasks players of both (opposed) teams carry out on the court on particular playing positions, in relation to the position of the ball as the centre of communication, structure and generate various game states.
SYSTEM MODELLING
During the game the players recognize and anticipate events in play and, in accord with that, they make selective decisions and react (Trninić, 1995;Trninić, et al., 2010aTrninić, et al., , 2010b)).The system in basketball can be understood as a set of all participants in a match.In consideration the limitation can be put on just the ball and the players.However, in the context of actual competition one must include also referees, coaches and other offi cials, bench players, and even spectators (Trninić, Perica, & Pavičić, 1994).Further, in the context of structural analysis of knowledge in the game of basketball the category of game states has been established using kinematic description of the game.The position of game states in the hierarchical structure of basketball tree has been defi ned in the space between game tasks (bellow) and play tactics (above) (Trninić, 1995(Trninić, , 2006)).
Previous research using the Markov chains in sports
Team sports games are multilayer and complex sports activities in which a symbiosis of abundant cyclic and acyclic movements with the ball and without it can be seen (Trninić, 1995).They are de-termined by the relationships of cooperation between team-mates and of opposition between the opposing players and teams.A basketball match has its continuous course or game fl ow.It can be presented as an arranged sequence of tasks which, when realized on the court, generate game states (Trninić, Perica, & Pavičić, 1994).Within our model of basketball game states analysis it is assumed that the game fl ow has been discretized into defi nitely many moments or time parts.Also, the game fl ow has its three basic game states or phases: offense, defense and conversion (Jelaska, 2011;Knight & Newell, 1986;Perica, 2011).
Each phase of game fl ow has its specifi c characteristics conditioned by very specifi c and particularly defi ned goals within the complex collective tactical operation, which corroborate the notion that basketball game is high level tactical complexity sport.
The Markov chains have appeared in kinesiology/sports science for the fi rst time in 1977 applied in Bellman's paper.Norman (1999) made an overview of 17 scientifi c papers in which he analyzed the possibilities to use stochastic processes for modeling in kinesiology/sports sciences, whereas Clarke and Norman (1998) utilized stochastic techniques to investigate various decision-making processes in the game of cricket.Lees (2002) advocates for new methods, for example, the use of artifi cial neural networks, to be used in kinesiology for establishing characteristics of the whole (biomechanical) skills, instead of quantitative analysis because these new methods can be a useful tool to overcome limitations of classical statistical methods.
For example, Hirotsu and Wright (2003b) analyzed the game of baseball using the Markov chains.
They demonstrated how that approach might help to select optimal hitting strategies and how much the probability of winning increases if obtained strategy is followed.Also, the probability of winning in any state in the course of a game was calculated by using the Markov model -they solved the linear system of over one million simultaneous equations.
Bukiet, Harold and Palacios (1997) proposed an approach adoptable to be able to directly include the effects of pitching and defensive abilities.The approach can also be applied to fi nd optimal batting orders, run distributions per half inning and per game, and so on… Hirotsu and Wright (2003a) proposed, based on the actual data, a statistical model of an American football match that could be useful in providing deeper insights into team characteristics.
Lames used the fi nite Markov chains as a model for game sports, including its calculus (Kemeny, & Snell, 1976).Simulations were undertaken to assess the usefulness of certain tactical behaviors, as well as to assess the performance of individual players in team games.This idea was applied to tennis (Lames, 1988;1991) (Zhang, 2003) and team handball (Pfeiffer, 2003).Shirley (2007) indicates the states of the Markov chain are defi ned in terms of three factors: 1. which team has the ball possession (2 factors): Home or Away (Host or Guest); 2. How that team gained the ball possession (5 factors): Inbound pass, Steal, Offensive Rebound, Defensive Rebound, Free Throws; 3. the number of points that were scored on the previous possession (4 factors): 0, 1, 2, or 3. the largest possible model would have 2 5 4 = 40 states, but since certain combinations of the 3 factors are impossible, the largest model has 30 states.Also, making certain assumptions about the course of action in a basketball game can further reduce the number of game states.If one assumes, for example, that rare events like 4-point plays or loose ball fouls following missed free throws are impossible, then certain states can be eliminated without seriously affecting the usefulness of the model.The proposed model can provide a very detailed "micro simulation" of a basketball game.Quantities of interest can be computed via simulation.Some examples of these might be: 1. in-game win probabilities for a given team; 2. the expected number of points scored in a possession gained in different ways, such as offensive rebounds vs. defensive rebounds; and 3. the change in win probability as a function of the number of the ball possessions in a game; i.e. how useful a strategy is to "slow down the game?".
Basketball game modeling using the Markov chain
Within the context of the mentioned defi nition of basketball game states (Jelaska, 2011;Perica, 2011;Trninić, Perica, & Pavičić, 1994), it is obvious there are infi nitely many different states of the game.Such defi nition of the basketball game states, although formal and scientifi c, is not practical to be submitted to the Markov chains analysis.Therefore, we have to get defi nite number of game states.It would be done by equivalency analysis.We defi ne that two states are equivalent if they are alike in terms of space-time relationship.The feature of transitivity should be emphasized here, that is, if A and B are equivalent states and if C and D are also equivalent states, then A and C will be also equivalent states.Now the state of the Markov chain can be defi ned as the set of the entire states equivalent to a certain state.
Apparently, a single state of the Markov chain consists of infi nitely many interequivalent states, as well as a particular game state can be found only in one state of the Markov chain.A single state of the Markov chain occurs in the interval t t t i i , where t is selected empirically, so that our consid- eration would have a practical purpose.
From the Markov chains' aspect, the momentary state of the Markov chain has all the information needed for the decision-making about the selection of its immediately following state, that is, for the calculation of transition probability into the future state.It is an adequate model for our approach to the analysis of basketball game states, since any action in the state of position/transition in the moment t is a consequence, result of previous game states, therefore, no any additional information is needed for the determination of transition probability into the future game state.
The assumption is that the realization of tasks of individual players within a particular play tactics model generates the total of 1.We introduce states in four phases of game fl ow: • positional/set defense (n 1 states) -the fi rst set of states • transition defense (n 2 states) -the second set of states • transition offense (n 3 states) -the third set of states • positional/set offense (n 4 states) -the fourth set of states 2. The states with the unsuccessful outcome (n 5 states) -the fi fth set of states 3. The states with the successful outcome (n 6 states) -the sixth set of states 4. The state of a starting jump for the ball (one state) -the seventh set of states.
The standard assumption is that probability is for the transition between the states of the Markov chains being independent of the moment t for all the game states i,j and for all moments t.The equation probability of the transition into the state j in the moment (t+1) if in the previous moment t the chain has been in the state i.We will use the following notation the state of positional/set offense.The states from the other sets of game states are noted accordingly.
A particular phase of a game can be divided into the initial, intermediate and fi nal sub phases (Perica, 2011;Jelaska, 2011), where it is possible to have I i initial states, M i intermediate states and F i fi nal states, and where the following is valid I i +M i +F i =n i .
The matrix containing transition probabilities for the phases of positional/set and transition offense, P i,i i=3,4 (a diagonal elements of the block matrix P) will be a squared matrix with n i rows and n i columns of the following shape: Further, we assume that in the initial moment, t = 0, the starting jump for the ball is actively recognizable.That is, if with k we denote the state of the starting jump for the ball, then it would be valid that: Consequently, the Markov chain transition matrix is a block matrix P of the order N represented by: 7 ,..., 1 , , j i j i P P (2) where P i,j , i,j=1,...,7 are matrices of transition probabilities from the n i state of the i set states into the n j state of the j set of states.In it, the single block matrix P i,j is the dimension As, for example, the following is true 0 then in the proposed model it is not possible to transmute directly from the state of set/positional defense into the state of transition offense.That is because to transit from the set defense into the transition offense the decisive state of either success or failure of any previous play action should occur and, secondly, because it is not possible to transit directly from the transition offense into the set defense.Out of this follows that P 1,3 = P 3,1 = 0 (null matrix).Similar is valid for the transition from the set offense into the transition defense, and vice versa.Further analysis of game structure discloses the matrix P is the block matrix of the following form: The elements of the matrix P are interpreted in the following way: 1. Matrices P i,5 , P i,6 and P i,7 for i=1,...,4 are performance indicators within a particular game phase, because they contain probabilities of entering the states of success/failure.2. P 2,1 and P 3,4 are the basketball game intensity (tempo) control indicators.In borderline instances, when blocks P 2,1 and P 3,4 converge to null matrix, that indicates high intensity and uncontrolled play.
The future experimental research studies should reveal preferred game models (combinatory) in every individual team as well as in the elite class of European basketball as a whole (Euroleague).In other words, the preferred "walks" along the Markov chain should be found within a particular phase and sub phase of game fl ow.
The process of in-season sports preparation and team play coordination improvement will probably change values of certain matrix elements, that is, transition probabilities will change.Effi ciency, or successfulness increase of the preferred combinatory is expected in all phases of game fl ow.
The fi rst block matrix of transition probabilities is the null matrix according to the defi nition of variables of the initial states of the positional/set and transition offense (Jelaska, 2011;Perica, 2011).
The matrix containing transition probabilities for the phase of positional/set and set defense P i,i i=1,2 will be a squared matrix with n i rows and n i columns of the following shape: The matrix has a structure in accord with the defi nition of variables of the positional/set and transition defense (Jelaska, 2011;Perica, 2011).
The upper right sub block of the matrix P i,i indicate again game intensity or tempo of play.When it tends to the null block matrix, it indicates the controlled game fl ow ("prolonged offense").
Our assumption is also that the process is stationary within the framework of one match, that is, particular probabilities do not change during a match.
CONCLUSIONS
Investigations, validation, and evaluation of events in the game of basketball are necessary for scientifi c foundation of applicative kinesiology in sport.In the present paper the concept of game states has been defi ned from the aspect of the Markov chains' application in the theory of complex sports activities.The continuous game fl ow has been discretized and explained as one characteristic sequence, order of game states.An abstract system of basketball game has been shaped in the paper as the theoretical foundation for its later detailed elaboration and verifi cation of its operation.The discretization of the continuous course or fl ow of a basketball game and the defi nition of equivalence among game states have given the prerequisites for the determination of transition probabilities between system states.A mathematical model, the Markov chain and discrete stochastic process have been used to describe the interaction among the system states.The matrix of transition probabilities has been structured between particular states of the Markov chain.The proposed model differentiates game states within four phases of game fl ow and enables the prediction of the future states.The application of the model in kinesiology/ sport science research studies will allow the recognition of prevailing game tendencies in the European elite basketball, thus enabling, through the calculated parameters, the recognition whether there are any tendencies to control game intensity, what are the levels of tactical combinatory, and what are the preferred tactical models of particular teams.
The future research guidelines should establish and explain the relationships and connections between particular game states and game phases, and the determination of numerical values in the proposed operational model.So, the analysis of transition probabilities matrix values should probably be a considerable contribution to the situational approach in the empirical verifi cation of basketball regularities and balanced game principle.
From the aspect of sports games theory, the continuation of research is necessary on intrinsic set of information (internal states of players and the demands specifi c for a particular type of players), game states, game fl ow and balanced game as components of the system "basketball game".The fi nal, eventual aim should be the determination of relations and correlations between kinematic parameters of game states and intrinsic parameters of balanced game, as well as the construction of a mathematical model which will embrace these associations also.
the four basic groups of game states: | 4,639.6 | 2012-01-01T00:00:00.000 | [
"Mathematics",
"Computer Science"
] |
COMPARISON OF THE INFLUENCE OF STANDARDIZATION AND NORMALIZATION OF DATA ON THE EFFECTIVENESS OF SPONGY TISSUE TEXTURE CLASSIFICATION
. The aim of this article was to compare the influence of the data pre-processing methods – normalization and standardization – on the results of the classification of spongy tissue images. Four hundred CT images of the spine (L1 vertebra) were used for the analysis. The images were obtained from fifty healthy patients and fifty patients with diagnosed with osteoporosis. The samples of tissue (50×50 pixels) were subjected to a texture analysis to obtain descriptors of features based on a histogram of grey levels, gradient, run length matrix, co-occurrence matrix, autoregressive model and wavelet transform. The obtained results were set in the importance ranking (from the most important to the least important), and the first fifty features were used for further experiments. These data were normalized and standardized and then classified using five different methods: naive Bayes classifier, support vector machine, multilayer perceptrons, random forest and classification via regression. The best results were obtained for standardized data and classified by using multilayer perceptrons. This algorithm allowed for obtaining high accuracy of classification at the level of 94.25%.
Introduction
Continuous technical development entails the development of medicine, which increases the effectiveness of diagnosing many diseases. Currently, medical imaging techniques, including computed tomography, play one of the main roles. The use of modern computed tomographs (CT) allows to obtain a monochrome image of the patient's body section in very good quality [1]. Depending on the needs, one can adjust the appropriate exposure parameters. Modern CT scanners are systems with intelligent X-ray dosing. During scanning, the system changes the lamp current parameters and significantly reduces them after passing through the areas requiring higher values (such as pelvis, shoulder girdle) [17]. The image from the CT scanner consists of the so-called voxels, which are the threedimensional equivalent of pixels in a two-dimensional image. In the images of individual layers of the examined organ, each pixel has its value determined in Hounsfield units, which correspond to the x-ray absorption coefficients [23].
The key issue in computer image processing is clearly identifying the areas of interest (ROI) [23]. The right choice of such an area increases the chances of obtaining diagnostically effective results. A valuable source of information about the condition of the tissue being examined is the texture of the image [2,3]. This property may include image granularity, pattern orientation, homogeneity, local contrast, or average brightness level of a given image area. On this basis, it is possible to distinguish two images from each other, as well as designate areas in a given image that meet certain conditions. The image texture can be symbolically described by providing the values for the finite feature vector [21]. In order to characterize the texture mathematically, a number of parameters calculated based on the properties of the digital image were introduced. In the literature, the following types of parameters can be found to describe the texture:statistical [8],structural [8],using signal processing techniques [12],morphological ( [11,15,17]). The high quality of CT images positively affects the possibilities of interpreting the texture of the areas of interest to us. This allows classification and segmentation, among others, in liver images [5,13], detection of lung diseases [24] and evaluation of the effectiveness of chemotherapy in rectal cancer, classification of brain tumors and gastrointestinal cancers [22].
The image texture analysis methods combined with appropriate pre-processing and classification algorithms have found wide application in the diagnosis of internal organ diseases imaged by various methods. Examples of such applications include the diagnosis of benign and malignant microcalcifications on mammographic images of the breast (X rays) [10], classification of atherosclerotic plaques in coronary arteries (endovascular ultrasound) [22], identification of malignant brain tumor types (magnetic resonance imaging) [25], detection of focal lesions in the liver (computed tomography) [6] and identification of Hashimoto's disease (intravascular ultrasound) [18,19].
Due to the high efficiency of the use of texture analysis in the tissue diagnostic process, an attempt was made to use this method in the detection of osteoporosis [7,16,20]. Osteoporosis is a skeletal disease which leads to bone fractures that can occur even after a minor injury. Most often they relate to the spine, but they can also occur in other locations. Excessive bone susceptibility to osteoporosis damage results from a decrease in bone mineral density and disturbances in its structure and quality. Osteoporosis is often asymptomatic. Only the fractures of the vertebral bodies often cause chronic back pain syndrome that prevents normal functioning [4]. Therefore, it is important to regularly monitor the condition of bone tissue. A standard procedure in the diagnosis of osteoporosis is densitometry, which is used to assess the bone mineral density. The test result is expressed by means of indicators comparing the bone density of the examined person with the bone density of young healthy persons (T-score) and peers (Z-score) [14]. However, this does not allow for accurately determining the area of the tissue in which the defects occur, which is possible in the case of analyzing the texture of specific areas.
The following article presents the use of spongy tissue texture analysis on the diagnosis of osteoporosis and the impact of data pre-processingnormalization and standardizationon the results of tissue classification.
Material
The CT scans of spine from a hundred patients were used to conduct the experiment. Each patient was examined on a GE 32-row tomograph in the standard L-S spine examination protocol. Fifty of them belonged to the control group, without diagnosis of osteoporosis and osteopenia. The same number of patients was also found in the group diagnosed as having osteoporosis. Four samples were obtained from each patient, and therefore four hundred spongy tissue images were used in the study. The samples representing the spongy tissue of the spine were selected from the image of the cross-section of the L1 vertebra in its central part. The sample size was adjusted to obtain the maximum possible tissue area. As a result of using this approach, 50×50 pixel samples were obtained.
Method
The tissue samples obtained from the images were subjected to texture analysis. As a result, 290 features described by specific numerical values were obtained. Due to the large divergence of numerical intervals and the need to compare them with each other, the pre-processing operations, i.e. normalization and data standardization, were performed. On the basis of the obtained results, five types of classifiers were built and their effectiveness evaluated by using five parameters commonly used in descriptions of medical experiments.
Texture analysis
Image analysis was carried out with the MaZda program (version 4.6) [26]. This program allows to analyse the grey cardboard images and determine the numerical values of image features. The set of features has been obtained on the basis of histogram (9 features: histogram's mean, histogram's variance, histogram's skewness, histogram's kurtosis, percentiles 1%, 10%, 50%, 90% and 99%), gradient (5 features: absolute gradient mean, absolute gradient variance, absolute gradient skewness, absolute gradient kurtosis, percentage of pixels with nonzero gradient), run length matrix (5 features × 4 various directions: run length nonuniformity, grey level nonuniformity, long run emphasis, short run emphasis, fraction of image in runs), co-occurrence matrix (11 features × 4 various directions × 5 between-pixels distances: angular second moment, contrast, correlation, sum of squares, inverse difference moment, sum average, sum variance, sum entropy, entropy, difference variance, difference entropy),
Distribution of significance of features
In the research, 290 features were obtained for each sample. Among them, the features with constant values for each sample were eliminated and 267 features remained after reduction. They are ranked in the ranking of the importance of features from the most to the least important. Fifty features occupying subsequent, initial places in the ranking were used for further experiments. .
Normalization
Data normalization is the scaling of original data (e.g. input data) to a small specific range. This method performs a linear transformation of the original data usually to the interval [0,1] according to the formula: where [min, max] is the range in which the input data falls, while [new_min, new_max] is the new range of data [9]. As a result of the transformation, the range of the first feature in the ranking (209) changed from 29.9 to 107, 35 to a range from 0 to 1.
Standardization
Standardization is the central preprocessing step in data mining, to standardize the values of features or attributes from different dynamic range into a specific range [9] Standardization is a type of normalization of a random variable, as a result of which the variable obtains the average expected value 0 and standard deviation 1 [9]. This operation is performed according to the Z test formula: where: observed variable value, expected value, average, standard deviation. One important restriction of the Z-score standardization is that it must be applied in global standardization [9]. As a result of standardization, the range of values for the first feature in the ranking (209) changed from 29.9 to 107, 35 to a range from -2.08 to 3.39. In order to assess the accuracy of classifiers, the following were used: general classification accuracy (ACC), true positive rate (TPR), true negative rate (TNR), positive predictive value (PPV) and negative predictive value (NPV).
Results
The obtained classification results (presented in the tables below) allow to clearly determine the effectiveness of the classifiers depending on the type of pre-processing that were used. Support Vector Machine turned out to be the most effective classifier for the data after normalization. It obtained the highest values of all indicators used to assess the effectiveness of classifiers. The ACC value was 88.5%. Other TPR and PPV parameters reached 86.5%, TNR and NPV 90,5%. Similar results were achieved for the Naive Bayes classifier. They only differed in the ACC value, which in this case amounted to 88.25% and was 0,25% lower than in the case of Support Vector Machine. Random Forest turned out to be the least least effective classifier. The ACC value here was 86.25 and was 2.25% lower than the highest. Moreover, for the remaining parameters, Random Forest showed the lowest values. TPR and PPV obtained 84.5%, TNR and NPV 88.0%. Multilayer Perceptron turned out to be the best classification method for standardized data. The highest values for ACC = 94.25% as well as TPR and PPV equal to 95.5% were obtained for this classifier. For TNR and NPV, the values were 1% lower than for Support Vector Machine and Classification via Regression, which were equal to 94%. Naive Bayes turned out to be the least effective classifier, achieving an accurance of 87.25%. Other parameters were also the lowest among those obtained and amounted to 86.0% for TPR and PPV and 88.5% for TNR and NPV.
Conclusions
The results presented above clearly indicate the higher efficiency of classification of standardized data. The accuracy of results is 5.75% better, which is a significant difference in diagnostic tests. Moreover, other parameters are much higher in the case of the data after standardization -TPR and PPV by 9% and TNR and NPV by 3.5% from the highest values.
The above-mentioned results clearly indicate the best algorithm for data analysis which is their standardization and then the use of the Multilayer Perceptron classifier. The classification results obtained in this way allow obtaining a relatively high accuracy of 94.25%. In terms of medical diagnostics, the result is the basis for using this method in creating automatic image analysis systems. | 2,653 | 2019-09-26T00:00:00.000 | [
"Computer Science"
] |
Experimental Stress Intensity Factor Analysis of Mode Ⅱ using Strain Gauge and Tensile Shear Plate Specimen
In this study, analysis of the stress intensity factor ( K II ) in the in-plane shear mode was examined using a 2-axis orthogonal strain gauge. A specially shaped tensile shear plate specimen under shear control around the crack was prepared. The shearing strain near the crack was measured with this specimen and strain gauge, and the K II was calculated from the analytical equation. The K II by using the FEM analysis result and the extrapolation method was regarded as the theoretical value and it was compared with the experimental value. In addition, the effects of cracks length and distance between both cracks on the analysis accuracy of the K II were investigated. As a result, it was confirmed that the K II analysis in this study showed good accuracy except for some parts, suggesting the possibility of the proposed method.
Introduction
In recent years, many serious incidents caused by cracks in structures have been reported.In particular, the problem of aging social infrastructure such as tunnels and bridges has become apparent.In order to solve these problems, it is important to evaluate the risk of cracking and take appropriate measures to ensure safety.As a method for evaluating the risk of cracks, it is common to introduce the stress intensity factor (K), which is a fracture mechanics parameter.Therefore, researches (1)(2)(3) using strain gauges specialized for K value analysis and studies (4)(5)(6) on K value analysis which use commercially available strain gauges have been conducted for some time.In such previous studies, Kurosaki et al. (7)(8)(9) analyzed the K value for Mode II (that is, KII) using a 2-axis orthogonal strain gauge, and obtained certain results.However, the disadvantage of strain gauges is that they require a lot of work and time to use, and a simpler and labor-saving measurement method is desired in the field.In this regard, we have been studying a method for analyzing the KII value of the in-plane shear mode by using a strain checker that can save labor in strain measurement and a tensile shear plate specimen (11,12) .The strain checker (Tokyo Measuring Instruments Lab., Co., Ltd., FGMH-3A) has a built-in frictional strain gauge.Since it is attached to the measurement target by the magnetic force of the main body, it can be used repeatedly.In the previous studies (11,12) , the crack distance (shear length) of the tensile shear plate specimen was treated as constant, and the influence of the magnitude of the shear load around the crack has not been investigated.
Following the previous studies 11,12) , we attempted to analyze the KII value of the in-plane shear mode using a 2axis orthogonal strain gauge and five types of the tensile shear plate specimens with varying crack lengths and distances between cracks.The theoretical KII values were obtained using the FEM analysis results and the extrapolation method, and then it was compared with the KII value obtained from the experimental results.From the results above, the effects of crack lengths and distances between cracks on the analysis accuracy of KII values were clarified.
Strain Component for In-plane Shear Mode at Crack Tip
Figure 1 shows the case where the crack tip is in the inplane shear load mode.J. W. Dally et al. (13) represent the stress component and strain component at point P, dividing them into the regions shown in Figure 1 for the crack tip region under such a load condition.In other words, the stress and strain equations are expressed with the first region governed by the singular term of the stress intensity factor, the outer region as the second region, and the outer region as the third region.In this study, which targets strain gauges with a finite length, the strain components up to the first and second regions are targeted.Moreover, if the number of expansion terms of the strain component equation are taken large and expressed up to the third term, it becomes the following equation.
Since r 0 2 ⁄ in the second term on the right-hand side of the above equation is zero, it does not appear in each of •ℎ , •ℎ and •ℎ .Therefore, in equations ( 1)-( 3), the third term of the right-hand side r 1 2 ⁄ appears after the first term. 0 and 1 in these equations are unknown coefficients.
In each equation ( 1)-( 3), the coefficient 0 of the first term on the right-hand side is expressed by the following equation ( 4) including the shear mode KII.
The following are two equations in which θ = 0 is substituted into equation ( 3) and the shear strain component •ℎ is adopted up to the second and third terms.I.A case where up to the second term is adopted (Since r 0 2 ⁄ in the second term is zero, it is represented only by the first term).
II.When up to the third term is adopted (Since the second term is zero from the above, the number of terms is represented by two).
Analysis of Stress Intensity Factor KⅡ for
In-plane Shear Mode
Case of Analysis using Strain Gauge
Solving the above equation ( 5) for KII gives the following equation (7).
Equation ( 7) was used as an analytical equation by Kurosaki et al. (7)(8)(9) and Shimura et al. (11,12) in previous studies, and this study also deals with it.As shown in Figure 2, a 2-axis orthogonal strain gauge (Kyowa Electric Co., Ltd., KFGS-1-120-D16-11, Gauge length is 1 mm) is bonded to the extension line of the crack at an angle of ± 45 °.Then, the distance r from the crack tip becomes 1.3 mm.With this strain gauge, normal strain ε is measured twice (= 2ε), and that value can be directly treated as a shear strain •ℎ .By substituting the value of the shearing strain •ℎ and the distance r = 1.3 mm from the crack tip into equation ( 7), KII can be calculated.
Case of Analysis by Finite Element Method
Figure 3 shows the dimensions and shape of the tensile shear plate specimen used in the accuracy verification experiment described later.Finite element analysis (FEM) is performed assuming the experimental situation of this specimen, and KII is extrapolated using the element shearing strain of the analytical result by FEM near the crack tip.In this case, applying equation ( 6) considering up to the third term of the shearing strain component equation, and multiplying both sides of equation ( 6) by √2, the analysis equation becomes as follows.
When the left side of equation ( 8) is set to F and the coefficients other than r in the second term of the right side are rewritten as C �= √2 1 �, the equation is as follows.
The above-mentioned equation ( 9) is a linear function for r, and the intercept is the stress intensity factor.That is, as shown in Figure 4, KII can be obtained by extrapolating the F value on the left side with r.
As the boundary conditions of the FEM model, constraints other than the x, y, z-axis translational displacement for the left pin-joining circular hole and the xaxis translational displacement for the right circular hole are applied as shown in Figure 3.In addition, a load in the x-axis direction (maximum 2000N) was provided to the circular hole for pin-joining on the right side.The material property values are defined as 205 GPa for Young's modulus E and 0.3 for Poisson's ratio ν, assuming the use of cold-rolled steel SPCC.In the element division, the basic shape is a tetrahedral primary element, and a hexahedral primary element is adopted in the parts where the element strains around the crack end are acquired.A minimum element size of about 0.05 mm was used for the region where stress concentration is expected.
Accuracy Verification Experiment
Figure 3 shows the specimen used in this study.Previous study by Miyagawa et al. (10) have confirmed that the shape of this specimen is dominated by shearing stress between cracks in the central part.The specimen material is cold-rolled steel SPCC, and the red line in Figure 3 means the slits with a width of 0.32 mm due to wire cutting.These tips were regarded as the artificial cracks.The strain gauge was bonded to the crack tip as shown in Figure .2. Since it is a specimen shape with two cracks, it can be bonded to the opposite side.Furthermore, by using the back surface, it is possible to apply strain gauges at four locations with one specimen.This specimen was attached to a universal material testing machine (A&D Co. Ltd., TENSILON RTG-1310), and a tensile load was applied by displacement control of 0.001 mm/sec.The strains near the crack tips under the load were detected every 500N with the 2-axis orthogonal strain gauges and recorded on a data logger (Tokyo Measuring Instruments Lab., Co., Ltd., TDS-540).The strain measurement was performed three times for each specimen, and the arithmetic mean value of those strains was applied to the analytical equation (7) to calculate the KII Specimen No.
Results and Discussion
Figure 5 shows the results of strain measurement when the tensile loads were applied to the specimens (No.2, 4, 5) in increments of 500 N up to 2000 N.These results were obtained by fixing the crack length a at 15 mm and changing the distance between both cracks L to 10, 15, 20 mm.The abscissa is the loaded shear stress τ (load P / shearing area A, MPa).In either case, it can be seen that the output of the shearing strain •ℎ with respect to the loaded shearing stress τ has linearity.It can also be seen that the plot points of each symbol mean the loading level, and the shearing strain •ℎ increases as the ratio a / L of the crack length a to the distance between both cracks L increases.This is because when the shearing area A (distance L × specimen thickness t, mm 2 ) decreases, a large shearing stress τ occurs.In addition, when compared under the same loaded shearing stress τ, it can be confirmed that •ℎ increases as the ratio a / L decreases.
Figure 6 shows II e.calculated using the shearing strain •ℎ shown in Fig. 5 measured in the accuracy verification experiment and Eq. ( 7).In Fig. 6, II e.is taken on the abscissa.As the ratio a / L increases, the distance between both cracks L decreases, so the applied shear stress τ becomes the highest when a / L 1.5.Therefore, it can be seen that II e.shows the maximum value when the ratio a / L = 1.5 among the three types in Fig. 6.Furthermore, as described above, the shearing strain •ℎ increases as a / L decreases under the same loaded shearing stress τ, so it can be recognized that the II e.value increases due to the nature of analytical Eq. ( 7).
Figure 7 is a diagram for extrapolating the KII value from the F value calculated by Eq. ( 9) using the element shearing strain obtained from the FEM result assuming the specimen under tensile load.As in the experiment, the assumed shearing load has four stages, and four types are plotted in this figure.2 shows the II the. of all the remaining specimens obtained and organized by the same process.In this table, the loads applied to each specimen, the corresponding shearing stresses τ (upper column), and the theoretical values II the.(lower column, bold) are also shown.
Using the above results, the error rate e was calculated from the following equation to verify the analysis accuracy of the II e.obtained by the proposed method of this study.Figure 8 shows the relationship between the applied shearing stress τ and the error rate e for each specimen.Except for a / L = 0.5 (specimen No. 1, a = 10, L = 20), the error rate e is within ± 10 %, indicating that it has excellent analysis accuracy.According to Miyagawa et al. (10) , when the ratio of the half width w of the specimen to the shear length (distance between both cracks L) is 3 or more, shear failure becomes dominant.In the geometry used this time, the only specimen that matches the condition is No. 5 (a / L = 1.5, a = 15, L = 10) and w / L = 3.0 (w = 30, L = 10).In the case of targeting within the elastic region as in this study, it is considered that the condition does not necessarily have to be satisfied.The reason why the error rate becomes large a / L = 0.5 (specimen No. 1, a = 10, L = 20) is not clear at present.However, it is inferred that the magnitude relationship between the crack length a and the distance L has an effect on the II e.analysis accuracy.
Therefore, the relationship between the ratio a / L to the crack length a and the distance between both cracks L and the error rate e is summarized in Fig. 9.In Fig. 9, the ratio a / L is taken on the abscissa, and the error rates e for the four ratios a / L (0.5, 0.75, 1.0 and 1.5) are plotted.Furthermore, when a / L = 1.0, the acting shear stress τ is 7 types (11, 14, 22, 29, 33, 43, 58 MPa), so the number of plots shows the maximum of 7 points.When a / L = 0.5, the error rate exceeds 20 % at any loaded shearing stress level, but in other cases, it can be confirmed that the error rate is within ±10 %.
As a result, the KII analysis method proposed in this study suggests the range of a / L = 0.75 to 1.5 is currently useful.
Conclusions
In this study, we attempted to analyze the stress intensity factor II e.in the in-plane shear mode using the 2-axis orthogonal strain gauges and the tensile shear plate specimens.Furthermore, the II e.analysis accuracy was verified by comparing with the theoretical values II the.obtained by the extrapolation method using the results of FEM.In addition, the effects of crack lengths a and the distances between both cracks L on the analysis accuracy of II e.were researched.The results and findings obtained so far are as follows.1) KII analysis method using the 2-axis orthogonal strain gauges and the tensile shear plate specimens was proposed.2) Theoretical values II the.for the tensile shear plate specimens were obtained using FEM analysis and extrapolation method.
Table 2. Theoretical KⅡ the.for each specimen Fig. 8. Verification result of accuracy for KⅡ analysis Fig. 9. Effect of ratio a / L on KⅡ analysis accuracy 3) It was confirmed that the II e.analysis accuracy of the proposed method was within ± 10 % except for some specimens.This suggests that the ratio a / L to the crack length a and the distance between cracks L is useful in the range of 0.75 to 1.5.
Fig. 3 .Fig. 4 .
Fig. 3. Dimension and geometry of tensile shear plate specimen with the artificial cracks The intersection (intercept) with the abscissa F is the extrapolated value of the KII value, and these values were regarded as theoretical values II the. in this study.The II the.values for each load level are 1.558, 3.116, 4.675, 6.223 MPa√m, respectively.It should be noted that these values are for a / L = 0.75 (specimen No. 2, a = 15, L = 20).Table
Fig. 7 .
Fig. 7. Theoretical value II the.obtained by extrapolation method using FEM result
Table 1 .
Variations of specimen value.In order to investigate the effects of crack length a and crack distance L on the analysis accuracy of KII values, the five types of specimen s shown in Table1were used. | 3,656.8 | 2022-01-01T00:00:00.000 | [
"Materials Science"
] |
In situ phase contrast X-ray brain CT
Phase contrast X-ray imaging (PCXI) is an emerging imaging modality that has the potential to greatly improve radiography for medical imaging and materials analysis. PCXI makes it possible to visualise soft-tissue structures that are otherwise unresolved with conventional CT by rendering phase gradients in the X-ray wavefield visible. This can improve the contrast resolution of soft tissues structures, like the lungs and brain, by orders of magnitude. Phase retrieval suppresses noise, revealing weakly-attenuating soft tissue structures, however it does not remove the artefacts from the highly attenuating bone of the skull and from imperfections in the imaging system that can obscure those structures. The primary causes of these artefacts are investigated and a simple method to visualise the features they obstruct is proposed, which can easily be implemented for preclinical animal studies. We show that phase contrast X-ray CT (PCXI-CT) can resolve the soft tissues of the brain in situ without a need for contrast agents at a dose ~400 times lower than would be required by standard absorption contrast CT. We generalise a well-known phase retrieval algorithm for multiple-material samples specifically for CT, validate its use for brain CT, and demonstrate its high stability in the presence of noise.
the brain in mouse fetuses 14 , to date no in vivo or in situ phase contrast X-ray CT (PCXI-CT) of a brain within a substantially calcified skull has been published, which is likely due to the significant artefacts that typically arise from the skull upon back-projection.
Recent studies have demonstrated that PCXI-CT is an effective tool for small animal studies of the brain, providing high resolution images of tissue structures and clear delineation between grey and white matter [15][16][17][18][19] . Beltran et al. 17,20 . showed a 200-fold increase in signal-to-noise ratio using PCXI-CT over absorption contrast, indicating that in situ PCXI-CT can lead to very large improvements over conventional absorption contrast CT. A similar example from our data collected at the SPring-8 synchrotron in Japan is shown in Fig. 1, showing the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) improvement that can be achieved with phase contrast and phase retrieval.
These previous studies have all been limited to ex vivo imaging on brains that have been excised from their skulls. While these results show the clear potential for brain PCXI-CT in preclinical studies, in vivo imaging is much more difficult due to the strong phase gradients between tissue and bone as well as the strong absorption by bone, causing artefacts from the skull which obscure structures that would otherwise be well-resolved. Overcoming these artefacts is important for future in vivo preclinical research using imaging and, ultimately, for adaptation to the clinic. Herein we demonstrate the first visualisation of the brain in situ in a small animal model, performed using propagation-based PCXI-CT.
Background
Propagation-based phase contrast imaging. Propagation-based imaging (PBI) is the simplest phase-contrast method, wherein the phase shifts resulting from refraction within the object are converted to intensity variations via propagation between the object and the detector, with no optical components required along the path. As the wavefront propagates, small differences in phase accumulate between contrasting materials and/or changing thicknesses so that Fresnel fringes become clearly visible at the detector. The experimental setup for PBI differs from conventional X-ray absorption imaging only in the distance between the imaged object and the detector and in the requirement of a source with sufficiently high spatial coherence 21,22 .
As incident X-rays pass through a sample, the intensity measured downstream at the detector contains a combination of attenuation and phase information. In Fig. 2, a simple, single-material sample is shown with the resultant image at the detector exhibiting contrast that is mostly due to attenuation in the regions within and surrounding the sample; however the phase effects are clear at the boundaries between materials of differing projected refractive index (in this case, the sample material and the surrounding air). This results in the fringe pattern discussed above, seen on the right side of the figure. This fringe pattern can be used to extract quantitative information about the sample, as described in the next section. (a) A single propagation-based projection image of an excised close-to-term rabbit kitten brain, suspended in agarose. The bracket indicates the position of the brain in the tube. (b) Phase contrast tomographic slice of the rabbit kitten brain in (a). Note that the visible structures can only be seen due to the phase shifts imparted by propagation and would not be visible at all in absorption contrast CT. (c) Phaseretrieved tomogram of the brain from (a) and (b). CTs were acquired at 24 keV with an object-to-detector distance of 5 m. See methods section for experimental details. Simplified phase retrieval. The behaviour of an X-ray wavefield, as it passes through an object, is governed by the complex refractive index of the sample, which, for a monochromatic incident wavefield, ignoring polarization, is given by, At each point r in the sample, the real part δ is known as the refractive index decrement, describing the phase component, and β = λμ/4π is the attenuation index, describing the absorption component, where λ is the wavelength and μ is the attenuation coefficient of the material.
Under the projection approximation, which assumes that the transverse scattering contribution to the deflection of X-rays through a sample is negligible, the intensity immediately downstream of the sample (i.e. the detector side of the sample) is given by Beer's Law, r z dz 0 ( , ) where I 0 is the intensity of the monochromatic plane waves assumed to be incident on the sample, z is the direction of propagation, and r ⊥ represents the coordinates perpendicular to propagation. Similarly, the phase is given by, For a single-material object, these become, is the projected thickness of the sample. Since the intensity of a propagating wave is I(r) = |ψ(r)| 2 , the propagating wavefield ψ can be represented as: In the near-field regime (Fresnel number N 1 F ), the transport-of-intensity equation 23 holds: Here, ∇ ⊥ denotes the gradient operator in the x−y plane perpendicular to the optic axis z. From this equation and from the intensity and phase given above under the projection approximation, Paganin et al. 24 derived the following expression to recover the projected thickness for a single-material sample, is the intensity at the detector plane and R 2 is the sample-to-detector distance. Equation (7) was expanded upon by Beltran et al. 20 for the case of a two-material sample, giving: where the subscripts 1 and 2 refer to the two different materials, the first embedded within the second, and A(r ⊥ ) is the combined projected thickness, A(r ⊥ ) = T 1 (r ⊥ ) + T 2 (r ⊥ ). Determining A(r ⊥ ) is not easy, or may only be possible with limited accuracy, depending on the sample. Instead, we diverge slightly from their method, as outlined below.
Recovery of the exit surface intensity of an object from projection images of one material embedded within another can be achieved by modifying the work of Beltran et al. 20 . Their Equation (8) is Since A(r ⊥ ) is assumed to be a slowly varying function of r ⊥ , the denominator of the left-hand side of Equation (9) can be approximated as a constant when a spatial derivative such as ∇ ⊥ 2 is applied to it. When both sides of Equation (9) are multiplied by this term, it can therefore be grouped with the exponential term on the right-hand side to give An alternative approach is to consider that, for a two-material sample under the projection approximation, Equations (4) become, Substituting for T 1 (r ⊥ ) using Equation (10) gives, Assuming, again, that A(r ⊥ ) can be approximated as a constant, this results in a multiplicative change in the measured intensity and an additive shift in the phase, which disappears on differentiation in Equation (6); hence, this becomes like a single-material object where δ = δ 2 − δ 1 and μ = μ 2 − μ 1 in Equation (7).
Following Paganin et al. 24 , Beltran et al. 20 and Gureyev et al. 25 , the "absorption" contrast image can be recovered using the Fourier derivative theorem: (8)). We also note that, for tomography, we need only recover the absorption contrast image before using a tomographic reconstruction algorithm. In the slightly different context of phase retrieval of several sharp boundaries in tomography, Haggmark et al. 26 recently came to the same equation, also utilising the assumption of a slowly-varying thickness in addition to assuming a linear relationship between δ and μ for different materials at a given energy.
Single image phase retrieval of a brain in situ. For materials of similar composition (and hence
refractive index), such as the grey and white matter of the brain, the single-material phase retrieval algorithm applied with respect to the tissue/air interface works quite well to resolve the boundaries between these materials. Structures that are unresolved or poorly resolved with attenuation contrast alone become visible upon phase retrieval, since the phase retrieval filter suppresses noise, thereby increasing the SNR and CNR. Ideally, phase retrieval performed with respect to the grey/white matter boundary provides the best contrast resolution of brain structures, however when imaging the brain in situ, the inaccurate phase retrieval at the bone/tissue interface causes excessive blurring that overwhelms the features within the soft tissue of the brain. Figure 3 shows the in situ analogue to Fig. 1. In panel 3c, phase retrieval has been performed with respect to bone/air interface, resulting in an over-blurring at both the grey/white matter and bone/tissue boundaries. Nevertheless, the brain structure is more clearly resolved in Fig. 3c than in Fig. 3b, despite the incorrect phase retrieval at the bone/tissue interface and the associated artefacts caused by the the highly attenuating bone. In panel 3d, phase retrieval has been performed with respect to the tissue/air interface, again resulting in a highly-blurred reconstruction. In this case, brain features are very well resolved (panel 3e) but are dominated by the bone artefacts.
When phase retrieval is performed with respect to the bone/tissue interface using Equation (15) and μ and δ values derived from the NIST physical reference databases 27,28 (Δδ/Δμ = (δ 2 − δ 1 )/(μ 2 − μ 1 ) = 5.66 × 10 −10 ), both interfaces are better resolved with a more consistent resolution across the image than with the single-material algorithm (δ/μ = 1.54 × 10 −9 and δ/μ = 8.60 × 10 −9 for bone and brain tissue, respectively). These results are shown in panel 3 f, where it can be seen that the artefacts due to the highly absorbing bone are largely abated. The remaining artefacts fall into two types -ring artefacts and streak artefacts -and are ones which should ideally be corrected for prior to the phase-retrieval step. Streak artefacts across high-contrast edges are caused by multiple factors, and determining the dominant contributing factors is important for improving the SNR and CNR for in situ brain imaging.
Streak artefacts in CT.
Numerous phenomena contribute to the formation of streak artefacts in CT reconstruction, particularly along high contrast edges. The three main causes are (1) insufficient data within the dynamic range of the detector, (2) noise, (3) physical effects that reduce contrast and resolution. These all create strong artefacts when there are large attenuation gradients within the sample, particularly when trying to detect subtle variation in soft-tissue contrast (e.g. grey/white matter). For a brain in situ, artefacts from the bone that are not usually problematic become important, since they can overwhelm the underlying, relatively low-contrast brain structures. Even a sub-pixel offset in the centre of rotation correction can create obvious tuning-fork artefacts 29 from the bone that are not apparent when viewing an intensity palette that is scaled to include higher-density structures and hence a larger contrast range. The following phenomena are the primary contributors to streak artefacts in CT: • Photon starvation: This phenomenon occurs when an insufficient number of photons reach the detector through a highly-attenuating region of a sample, such as a metal implant or bone. When this occurs, the signal at that part of the detector is dominated by noise, leading to streak artefacts on reconstruction. There are a number of methods employed to correct for these artefacts, many of which involve thresholding and interpolation of the sinogram. These methods work quite well for compact regions where the region can easily be removed by thresholding in the sinogram without interfering significantly with neighbouring regions (for a discussion of these techniques, see Mouton et al. 30 ). • Beam Hardening: The relationship between the logarithm of the X-ray intensity transmitted through a sample and the sample thickness is linear at any given energy (Beer's Law) under the projection approximation; however, this is only strictly true for a monochromatic source. Lower-energy photons are absorbed more readily than higher-energy photons, leading to a deviation from this linearity for a polychromatic source (i.e. beam hardening). This change in the attenuation profile from the monochromatic case results in cupping artefacts through individual materials and streak artefacts along edges between different materials in the tomographic reconstruction. • Energy Harmonics: For synchrotron sources, monochromators are used to limit transmission of the source to a narrow band of energies. Energies outside the desired value can cause a form of beam hardening if the spectral bandwidth is too broad or if higher-order harmonics of the monochromator crystal are allowed to pass through. In the former case, a standard beam-hardening correction can be applied, while the latter requires a correction that accounts for the specific energies associated with the higher order harmonics. • Compton scattering: Compton scattering occurs as X-rays pass through the sample and can contribute to streak artefacts by increasing the background intensity, hence decreasing the contrast across edges. The scattering cross-section is lobed in the forward direction but amounts to a relatively uniform contribution for the energies and fields of view typically used for small animal imaging. This effect diminishes on propagation and is expected to have a small effect for PBI imaging at object-to-detector propagation distances over 2 m. • Poisson Noise: Noise in an imaging system has a similar effect to Compton scattering, with the exception that noise exhibits substantial spatial variations while Compton scattering is typically spatially smooth. Like Compton scattering, it will vary the background signal, resulting in a change in the attenuation gradient across high-contrast boundaries. This leads to an over-or under-estimation of the attenuation in the regions tangential to these edges in the CT acquisition plane. • Point Spread Function (PSF): Imperfect detector response plays a significant role in streak artefacts, since it results in a decrease in image resolution, leading to blurring across edges. Deconvolution of the PSF sharpens the edges, but since it also amplifies noise, there is a tradeoff in doing so. This is covered in more detail in the results section under 'simulation and experiment' . • Edge Effects: The discrete nature of detector pixels means that there is a spatial averaging of the signal at each pixel when a sharp edge lies within the boundaries of a pixel 31 . The relevance of this effect increases with increasing pixel size, so it can be minimised with the use of high-resolution detectors. Edge effects are also tied in with the detector PSF and can therefore be accounted for, to some degree, in the process of deconvolution. • Centre of rotation offset: When acquiring a parallel-beam CT, it is nearly impossible to position the sample such that the centre of rotation corresponds exactly to the centre or edge of a pixel, resulting in an offset that must be accounted for when reconstructing. This is relatively easy to do, and there exist a number of different methods to determine the offset (for discussion of some of these methods, see Vo et al. 32 ). In most cases, it is sufficient to determine this offset to the nearest pixel. When it is not sufficient, the effect can be minimised to a local blurring, rather than streaks, by acquiring projections across 360° rather than 180°. Alternatively, we find that by mirroring the projections of a 180° CT and reconstructing the volume as if it were acquired over 360°, the same effect can be achieved.
Compton scattering can be ruled out as a significant contributor to the streak artefacts in our experiments, since the scattered photon density is both proportional to the sample size and inversely proportional to the object-to-detector distance 33 . We expect the scatter-to-primary ratio to be less than a few percent due to the very small beam size (3 × 3 cm) and large propagation distance (5 m) at relatively low energy (24 keV). Several of the other effects described above are discussed further in the results section.
Ring artefacts. Synchrotron CT is also subject to ring artefacts, which are due to temporal variations in the intensity of the beam and possibly also non-linearities in the detector response, both of which prevent proper correction using a 'flat-field' image. Minimising ring artefacts is particularly challenging for in situ brain imaging, since many ring removal methods exploit their circular symmetry (see Münch et al. 34 and Prell et al. 35 for examples); since the skull is also largely circularly symmetric, this makes it difficult to decouple signal from artefact, resulting in an over-correction of some skull regions, where sections of bone are interpreted as artefacts, and an associated under-correction of regions that are diametrically opposite. While the development of effective ring artefact correction algorithms is essential for preclinical studies of the brain, it is omitted here as beyond the scope of this work; however it should be noted that a better option may be to prevent these artefacts from forming in the first place by shifting the sample in regular manner with respect to the detector (e.g. by translating the sample or detector vertically) during acquisition of the CT. This would prevent the variations in pixel sensitivity that are responsible for the ring artefacts from being consistently present in a single reconstruction plane. Another possibility is to address the problem where it initiates by more accurately characterising the detector components, as well as any time fluctuations in the X-ray source, in order to properly account for them.
Results
Streak artefacts from limited spatial resolution: simulation and experiment. To explore the effects that the detector PSF has on CT imaging, an absorption contrast CT of an aluminium rod in air was simulated, assuming an X-ray source energy of 26 keV and using the corresponding attenuation coefficient for aluminium, μ = 445.23 m −1 . The projection images were then convolved with a 2D Gaussian (σ = 30.2 μm) before back projection. The simulation results are shown in Fig. 4, where panel 4a is the initial ideal object and panel 4b is the resulting reconstruction after PSF blurring. The latter is still a reasonable representation of the ideal object when the intensity palette is scaled to display the full intensity range; however, when the scaling is adjusted to highlight the subtle variations in the background region (4c), dark streak artefacts are clearly evident emanating from the high-contrast edges, with larger effects along longer edges. Figure 5 shows a reconstruction of the aluminium phantom, measuring approximately 5 mm in length in the CT acquisition plane. An energy of 26 keV was used for imaging to ensure sufficient transmission through the relatively dense aluminium. Panel 5a shows the image scaled to include the full greyscale palette in the same way as the simulation in panel 4b, clearly delineating the aluminium strip with only minimal artefacts. When scaled to the background palette (panel 5b), the extent of the artefacts is more apparent. In the inset image, dark streak artefacts can be seen that resemble those in the PSF simulation of panel 4c. The detector line spread function was measured to sub-pixel accuracy in two orthogonal directions using a 250 μm thick tungsten edge and a Pearson type VII function 36 : The PSF was then approximated by a 2D elliptical Pearson VII function: where (x 0 , y 0 ) is the centre of the PSF image, m is the larger of the two parameters m from the orthogonal line spread function fits, and a h and a v are the amplitudes of the horizontal and vertical fits, respectively. The Pearson type VII function was chosen to represent the PSF, because it ensures that the denominator in the deconvolution is always greater than zero. The exact parameters used for this fit are c = 0.0701, m = 3.222, a h = 1.443, and a v = 1.477. The PSF was found to be much broader than expected, with visible effects across sharp edges spanning ~80 pixels. The full-width at half-maximum (FWHM) is ~2.6 pixels. When the PSF is deconvolved using Wiener deconvolution, the effect of the streak artefacts is improved, but there is clearly still a large component remaining (panel 5c), indicating that other effects are involved. These results show substantial artefacts that dominate the background of the image in the regions immediately surrounding the metal phantom. In many ways, the skull is more forgiving in that its roughly circular symmetry results in an averaging out of many of the streak artefacts in addition to restricting the bulk of them to the region outside its boundary; however, the skull is also the biggest obstacle in correcting those artefacts as it completely surrounds the brain. This fact rules out the use of many of the most common metal artefact reduction techniques available, which often involve some form of thresholding and in-filling of the sinogram, that might otherwise be applied to reduce bone artefacts. Even with as precise as possible thresholding and adaptation of a normalisation method similar to that used by Meyer et al. 37 , we find that the skull is too broadly distributed across the sinogram to differentiate it from soft tissue for in-filling. Iterative reconstruction methods, however, may prove to be useful for artefact reduction 30 . Harmonic contamination measurement. Finally, a harmonic contamination test was performed to determine the significance of the monochromator harmonics. Images were acquired with aluminium sheets of increasing thickness placed in front of the detector until the measured signal fell to that of the dark current. The negative natural logarithm of the mean normalised intensity, , was plotted as a function of aluminium thickness and fit, using least squares minimisation, as: where μ F and μ H are the attenuation coefficients corresponding to the fundamental and third harmonics of the Si (1 1 1) monochromator crystal, T is the thickness of the aluminium, and a is the fractional contribution of the fundamental harmonic to the mean intensity at the detector. The fit can be seen in Fig. 6, with a value of a < 1%. The X-ray conversion efficiency decreases substantially with increasing energy 38 ; as expected, we find the contribution of the third harmonic to be an insignificant factor for the streak artefacts created on back projection.
The phenomena outlined in the previous few sections are the most common causes of streak artefacts in CT. These simulations and experiments have shown that the artefacts cannot be easily explained by one individual source. The phenomenon that clearly does contribute -the PSF -is not sufficient to account for the full extent of the artefacts. We conclude that there are likely a number of different phenomena contributing that, while occurring individually might only have a small affect, together amount to a much larger cumulative effect. Isolation and correction of these effects remains the subject of future work.
Rabbit kitten brain CT. The full volume of the in situ rabbit kitten data set was reconstructed using filtered back projection (FBP) and rotated to create axial, sagittal, and coronal views for both absorption contrast and phase retrieved PBI CT. Absorption contrast CTs were acquired with the detector as close to the sample as physically possible, while phase contrast was achieved through propagation of the wavefield. Phase retrieval was performed using μ and δ values derived from the NIST physical reference databases 27,28 (Δδ/Δμ = 5.66 × 10 −10 ). Several slices of the absorption contrast CT volume can be seen in Fig. 7. The views are denoted in blue, red, and green for axial, sagittal, and coronal, respectively, and the crosshairs on each panel correspond to the locations from which the other two views in that row are cut. Note that all of the images in Fig. 7 are conspicuously featureless apart from some bone. However, in the corresponding phase retrieved views in Fig. 8, grey and white matter boundaries are resolved, and several specific brain features are clearly delineated. The overall SNR gain with phase contrast brain CT was found to be 19.7 ± 1.5. This was determined using six of the flattest regions in each image, where the SNR is the ratio of the mean to the standard deviation of each region. This number may seem surprisingly small, given the marked improvement between Figs 7 and 8; however, since the radiation dose required is inversely proportional to the square of the gain 9 , we can see that ~400 times more dose would be required to obtain the same result with conventional absorption contrast CT.
Remarkably, the brain structures visible in these images are not obscured by streak or ring artefacts. This is due to the volume being rotated with respect to the CT acquisition plane in which the artefacts are created; hence the artefacts are minimised. In panel 8a, the frontal lobe, frontal cortex, and striatum can be seen. Panel 8d shows the parietal cortex, hippocampus and thalamus, and in panel 8 g the frontal cortex, corpus callosum, and caudate nucleus are all clearly resolved. The strongest streak artefacts can be seen in the axial images. This is because the axial orientation corresponds to only slightly acute angles with respect to the acquisition plane, while the angles of the other orientations are much larger. For a purely monochromatic source, this is expected to be a straight line as per Beer's Law. This deviation from linearity is due to contribution from the third harmonic of the monochromator crystal, which becomes dominant at larger thicknesses. The slope of the curve below ~20 mm corresponds to the attenuation coefficient of Al at the fundamental energy, and at larger thicknesses it corresponds to that of the third harmonic. In general, streak artefacts pose a particular problem for brain PCXI-CT due to the very low contrast between structures within the brain. Streaks that are not distinguishable above the noise in absorption contrast CT can dominate brain structures once that noise has been suppressed on phase retrieval, since they often display similar or even higher contrast. The many possible causes of these artefacts can also be very difficult to decouple. Nevertheless, we find that by paying particular care to the orientation of the sample in the CT acquisition plane, we can minimise these effects. Some residual streak artefacts still persist, and further investigation is required to determine the most appropriate method by which to correct for these effects. For preclinical studies (e.g. tissues from deceased animals or brief terminal experiments under anaesthesia) where dose is not an issue, or for studies involving non-biological samples, it should be possible to eliminate all of these artefacts by acquiring two or three CTs in orthogonal orientations.
Ring artefacts can also be seen in the lower part of the coronal images of Fig. 8. As with the streak artefacts, these are minimised by sectioning slices at an angle with respect to the CT acquisition plane. For this volume, the coronal orientation is offset by 40° from the acquisition plane. This means that fewer consecutive image pixels contain ring artefacts originating from the same detector elements, reducing structure in the artefacts. The artefacts, however, persist to some degree along the readout direction of the detector. This results in a diffuse band that can be seen across panel 8b from the lower centre of the image, running diagonally upward and to the right, through the cerebellum. This effect is most prominent at the centre of rotation of the sample and becomes less so at larger radii.
It should also be noted that it is of particular importance for phase contrast brain imaging to accurately account for all of the phenomena that cause each of the different types of artefact. There are many correction methods that work effectively for absorption contrast CT imaging that are insufficient for the low-contrast edges that are enhanced using phase contrast imaging, as the latter exposes effects of these artefacts that, while present in absorption contrast imaging, are not generally problematic as they are below the noise threshold. With further development of artefact correction methods (e.g. PSF deconvolution, improved detector characterisation, modelling source variations, etc.), we anticipate that the SNR and CNR of phase contrast brain CT can be even further improved.
Conclusions
Propagation-based PCXI-CT is shown here to be an effective tool for visualising the brain in situ for preclinical animal studies. While the surrounding skull, temporal fluctuations in the intensity of the source, and detector imperfections provide distinct challenges with respect to reconstruction artefacts, we find that there are ways to work around these limitations to see brain structures that might otherwise be obscured. We present a two-material phase retrieval algorithm for tomography, which was shown to be highly effective at delineating soft tissue from bone. In addition, we find that by changing the 'sectioning angle' of the 3D volume, we are able to significantly reduce contamination from streak and ring artefacts. We have identified the most problematic causes of these artefacts, and while further work will be required to address these phenomena, it is clear that it is already Fig. 7, now with phase contrast and phase retrieval. As with Fig. 7, axial, sagittal, and coronal views are marked in blue, red, and green, respectively. Circularly symmetric ring artefacts can be seen as a white blurring toward the bottom of panels (c) and (i). These also manifest as a diffuse white band in panel (b), running from the lower centre of the image toward the cerebellum (Cb). Note that these artefacts are not visible in the absorption contrast images in Fig. 7, since there they are below the level of the noise. Reference labels delineated in coronal sections are observed at the level of the frontal cortex (FCx) in images (a) and (g), and level of the parietal cortex (PCx) in (d), showing the caudate nucleus (CN), hippocampus (Hip), thalamus (Th) and corpus callosum (CC).
SCientifiC REPORTS | (2018) 8:11412 | DOI:10.1038/s41598-018-29841-5 possible to identify structures that have previously been unresolved with conventional X-ray CT. The substantial SNR gain achieved using PCXI-CT has no requirement for contrast agents and allows for the visualisation of features that would otherwise require a ~400-fold increase in the radiation dose required to obtain the equivalent results with conventional absorption contrast CT.
Methods
This experiment used rabbit kittens that had been used in experiments conducted with approval from the SPring-8 Animal Care (Japan) and Monash University (Australia) Animal Ethics Committees. All experiments were performed in accordance with relevant guidelines and regulations. The kittens were humanely killed in line with approved guidelines and the carcasses scavenged for this experiment.
To examine CT streak artefacts from strongly-absorbing samples, simulations were performed of a CT of an aluminium rod with a length and thickness designed to mimic those of the rabbit kitten skulls in the CTs discussed below. For comparison, a CT was experimentally acquired of an aluminium phantom at 26 keV at the shortest feasible propagation distance of 12 cm. The phantom consisted of a 0.3 mm thick strip of 99% pure aluminium sheeting suspended in agarose and was specifically designed to aid determination of the primary cause of the streak artefacts seen in in situ brain imaging by mimicking the projected thickness of the highly attenuating skull. The phantom CT was acquired using a 4000 × 2672 pixel Hamamatsu CCD camera (C9300-124) with a tapered fiber optic bonded between the CCD chip and the 20 μm thick gadolinium oxysulfide (Gd 2 O 2 S; P43) phosphor, with an effective pixel size of 16.2 μm. Each CT consisted of 1800 projections spanning 180° of rotation, with an exposure time of 80 ms per projection.
To visualise the brain, CTs were acquired of a scavenged New Zealand White rabbit kitten head and an excised rabbit kitten brain, both at 30 days gestational age (GA; term ~32 days), suspended in agarose. They were acquired at an energy of 24 keV and at a 5 m sample-to-detector propagation distance using a 2048 × 2048 Hamamatsu digital sCMOS camera (C11440-22C) with a 25 μm thick gadolinium oxysulfide scintillator and a pixel size of 15.1 μm. 1800 projections were acquired over 180° for both CTs, with an exposure time of 200 ms and a dose rate of 22.4 mGy/s. Further CTs were acquired at a higher resolution in order to test the effects of detector resolution on streak artefacts. These consisted of a rabbit kitten brain in situ, excised from a scavenged New Zealand White rabbit kitten at 29 days GA, suspended in agarose, at propagation distances of 5 m and 11 cm. Both were captured at an exposure time of 83.3 ms using a second 2048 × 2048 Hamamatsu digital sCMOS camera (C11440-22C) with a straight fibre optic and a 15 μm thick gadolinium oxysulfide scintillator with a pixel size of 6.49 μm. Due to the projected sample size being larger than the detector field of view, 7200 projections were taken through 360° of rotation, which were later stitched together with linear blending to create a single dataset of 3600 projections spanning 180°, with a dose rate of 33.9 mGy/s. All CTs were acquired at a source-to-object distance of 210 m on beamline BL20B2 at the SPring-8 synchrotron in Japan and were reconstructed using FBP. Reconstructions were performed on the MASSIVE supercomputer in Melbourne, Australia using the ASTRA Toolbox CUDA accelerated FBP algorithm 39 .
Data Availability. The datasets supporting the findings of this study are available from the corresponding author on reasonable request. | 8,003.8 | 2018-07-30T00:00:00.000 | [
"Physics"
] |
miRNA-221 promotes cutaneous squamous cell carcinoma progression by targeting PTEN
Background Cutaneous squamous cell carcinoma (CSCC) is a common type of skin malignancy. MicroRNA-221 (miRNA-221) is a critical non-coding RNA in tumor initiation and progression. However, the molecular mechanisms of miRNA-221 in the development of CSCC remain unknown. This study investigated the expression of miRNA-221 in CSCC and its potential tumor biological functions. Methods MTT assay, colony assay, PCR, and Western blot were adopted. Results In this study, miRNA-221 expression was significantly higher in CSCC tissues and cell lines than in normal tissues and cells (P < 0.05). Further functional experiments indicated that miRNA-221 knockdown inhibited the proliferation and cell cycle, while upregulation of miRNA-221 presented the opposite role. The dual reporter gene assays indicated that PTEN is a direct target gene of miRNA-221. PTEN protein or mRNA levels were decreased after the cells were transfected with miR-221 mimics. Conclusions Taken together, the obtained results indicated that miR-221 plays an oncogenic function in CSCC by targeting PTEN and further suggest that miR-221 may be a potential target for CSCC diagnosis and treatment.
miR-221 is a member of the miR-221/222 cluster, which is located on the X chromosome. miR-221 has several conserved seed sequences which are identical to its homologous miRNA, miR-222 [12]. In particular, miR-221 expression level is reported to be up-regulated in several types of human cancers, including hepatocellular carcinoma [13], prostate cancer [14], and colon cancer [15], suggesting the oncogenic role of miR-221 in cancer initiation and progression. Conversely, in lung cancer miR-221 exhibits tumor suppressor roles [16]. The authors reported that miR-221 suppressed growth in four lung cell lines. The above results highlighted the dual functions of miR-221 in different cancer types. Therefore, it is necessary to explore the specific role of miR-221 in specific cancer types. The role of miR-221 in the progression of CSCC and its underlying mechanism, however, remains obscure.
In the current investigation, we found that miR-221 was significantly up-regulated in CSCC tissues and cell lines. In vitro experiments showed that silencing of miR-221 promoted CSCC cell growth. We also identified PTEN as a target of miR-221. Together our data suggest that miR-221 plays an oncogenic role in CSCC and further imply that miR-221 may be a novel target for diagnosis and treatment of CSCC in the near future.
Tissue samples
A total of 64 pairs of CSCC tissues and adjacent non-tumor tissues were collected from patients. All tissues were immediately stored in liquid nitrogen after surgery. This study was approved by the Medical Ethics Committee of The First People's Hospital of Nantong. Written informed consent was collected from each patient.
Cell culture
CSCC cell lines (SCC13, A431, HSC-5 and SCL-1) and human normal skin cell line (HaCaT) were bought from the Cell Bank of Type Culture Collection, Chinese Academy of Sciences (Shanghai, China). Cells were maintained in RPMI-1640 (GIBCO, US) along with 10% fetal bovine serum at 37°C, in a humidified air with 5% CO 2 .
Cell transfection
Cells (3 × 10 5 /per well) were seeded into 6-well plates for 12 h. 50 nM miR-221 mimics, miR-221 inhibitor or scrambled miRNA control (miR-NC) (GenePharma, Shanghai, China) were mixed with Lipofectamine 2000 reagent (Thermo Fisher Scientific, Inc.) for 15 min at room temperature in 200 μL of FBS-free medium. After that, the mixed medium was added into the well. After 48 h, total protein and RNA were extracted and then subjected to Western blotting and qRT-PCR analysis.
RNA extraction and qRT-PCR
Total RNA was extracted from tissues and cell lines using the Trizol reagent (Invitrogen, Carlsbad, CA, USA). 1 μg of RNA was then reversely transcribed into cDNA using PrimeScript RT master Mix (Takara, Dalian, China). Then the PCR reaction was performed in the Applied Biosystems 7900 Fast Real-Time PCR system (Applied Biosystems, Foster City, California, USA) under the guidance of the manufacturer's protocols. GAPDH and U6 were used for normalization for mRNA and miRNA, respectively. 2 -ΔΔCt method was adopted to calculate the relative gene expression.
Cell proliferation assay
After transfection, cells (2 × 10 3 cells/well) were seeded into 96-well plates and maintained at 37°C in 100 μl of culture medium. After transfection for 24, 48, 72 and 96 h, MTT (5 mg/mL, 30 μL) was added to each well Then, after removing the medium, 100 μL of DMSO (Sigma) was added to solubilize the crystals and the absorbance was measured at 450 nm. Experiments were performed in triplicate and repeated at least three times independently.
Colony formation assay
Cells were seeded at 500 cells in 6-well plates with 0.6% agarose underlay. After 14 days, colonies stained with crystal violet (Sigma-Aldrich) with more than 50 cells were counted.
Cell cycle assay 1 × 10 6 cells per well were seeded into the 6-well plates and incubated overnight. After transfection for 48 h, cells were harvested and washed gently with PBS. Cells were stained with propidium iodide (PI) and then were analyzed by the Modfit LT software (Verity Software House, US).
Western blot
The proteins was extracted in RIPA buffer and then separated in 10% SDS-PAGE, and transferred onto PVDF membranes (Millipore, Billerica, MA, USA). Membranes were blocked with 5% (v/v) milk first and then incubated with primary antibodies overnight at 4°C. After incubating with horseradish peroxidase (HRP)-conjugated anti-mouse antibody (1:2000) (DakoCytomation), protein blots were visualized by ECL (GE Healthcare). β-actin was used as a loading control.
Luciferase report assay
The wild type (WT) and mutant type (MUT) of PTEN 3′-UTR luciferase reporter gene plasmids were generated by Yearthbio (Changsha, China). The cells were then co-transfected with miR-NC or miR-221 mimic, and with the WT or MUT of PTEN-3′-UTR reporter plasmid using Lipofectamine 2000 reagent. After incubation at 37°C for 48 h, luciferase activities were detected using dual-luciferase activity assays (Promega, Madison, WI, USA).
Statistical analysis
Data are presented as the mean ± SD (standard deviation). Statistical significance for differences between groups was determined by Student's t-test or one-way ANOVA. P values of < 0.05 were considered significant.
miR-221 level is increased in human CSCC tissues and cell lines
The miR-221 levels were determined by RT-qPCR in CSCC and adjacent non-tumor tissues. As shown in Fig. 1a, miR-221 expression was upregulated in the CSCC tissues compared with the adjacent noncancerous tissues. Consistently, all involved CSCC cell lines (SCC13, A431, HSC-5 and SCL-1) had significantly higher miR-221 levels than the human normal skin cell line HaCaT (Fig. 1b).
miR-221 promotes growth of CSCC cells
RT-qPCR was adopted to detect the level of miR-221 after miR-221 mimics or inhibitor treatment. The results showed that miR-221 mimics enhanced the expression of miR-221 in A431 cells (Fig. 2a), and miR-221 inhibitor dramatically downregulated the expression of miR-221 in SCC13 cells (Fig. 2b). To determine the exact functional roles of miR-221 in CSCC cell lines, we evaluated the cell proliferation by MTT after miR-221 mimics or inhibitor transfection. We observed that up-regulation of miR-221 significantly promoted cell proliferation (Fig. 2c), while down-regulated expression of miR-221 significantly inhibited cell proliferation (Fig. 2d). Moreover, the colony formation assay indicated that cells transfected with miR-221 mimic formed more colonies than cells transfected with control (Fig. 2e), and the opposite result was found in the cells transfected with miR-221 inhibitor (Fig. 2f).
miR-221 promotes cell cycle of CSCC cells
We further used flow cytometry assay to examine the impact of miR-221 in the cell cycle distribution. We observed that the G0/G1 phase fraction of the control group was less than that of the miR-221 mimic group, with 43.4 ± 5.8% compared to 67.5 ± 6.1% (Fig. 3a), whereas knockdown of miR-221 in cells had fewer cells in the G0/G1 phase, but more cells in the G2/M phase (Fig. 3b). These results revealed that miR-221 can promote the progression of the cell cycle.
PTEN is a direct target of miR-221
We first used the TargetScan bioinformatics algorithm to explore the underlying mechanisms by which miR-221 exerts its function. PTEN was predicted as a potential target ( Fig. 4a). Dual-luciferase reporter assay verified that miR-221 impaired the luciferase activity of the wild type PTEN 3′-UTR (WT) but not the MUT 3′-UTR of PTEN in cells (Fig. 4b). Gene expression analysis indicated that PTEN mRNA expression was decreased after transfection of miR-221a mimic in cells (Fig. 4c). Similar results were also achieved in Western blot analysis; miR-221 mimic decreased the PTEN level in cells (Fig. 4d). qRT-PCR analysis showed that PTEN mRNA expression levels were lower in CSCC tissues than adjacent non-tumorous tissues (Fig. 4e). Correlation analysis between miR-221 and PTEN mRNA expression in CSCC tissues demonstrated an inverse relationship. In all, miR-221 can directly target PTEN in CSCC cells (Fig. 4f).
miR-221 regulates AKT signaling pathway
We next explored whether the AKT signaling pathway was involved in miR-221 mediated cellular functions miR-221 in CSCC cells. Western blot analysis showed that transfection of cells with miR-221 mimic could enhance pAkt expression (Fig. 5a). In addition, the expression of Bcl-2, cyclin D, MMP2 and MMP9, all of which are regulated by pAkt, was slightly upregulated in the miR-221 mimic group (Fig. 5a). The opposite situation was found in cells transfected with miR-221 inhibitor (Fig. 5b).
Discussion
In this study, we determined that miR-221 is increased in CSCC tissues and cell lines.
We also found that miR-221 regulates several hallmarks of CSCC including cell growth and colony formation. Although the molecular mechanisms of miR-221 are elucidated in several types of cancer, the role of miR-221 in CSCC development still remained poorly understood. Therefore, it is of great value to understand the function of miR-221 in CSCC carcinogenesis. miR-221 has been shown to serve tumor promoting roles in different types of human cancer. Ma et al. found that high expression of miR-221 in exosomes of the peripheral blood was positively associated with poor clinical prognosis of gastric cancer [17]. Li et al. demonstrated that up-regulation of miRNA-221 could target apoptotic protease activating factor-1, which further promotes ovarian cancer cell proliferation and indicates a poor prognosis [18]. Low miR-221-3p expression may lead to the poor prognosis of triple-negative breast cancer patients through regulating PARP1 [19]. However, there exist few reports concerning the potential function of miR-221 in human CSCC progression. In the current study, we elucidated the potential role of miR-221 in the malignant progression of CSCC.
We measured our collection of CSCC clinical samples, and observed that in tumor samples, the expression of miR-221 was higher than in normal tissues. Hence, we speculated that miR-221 may act as a tumor oncogene miRNA and its aberrant expression may be linked with advanced progression of human CSCC. Thus, we set the focal point on the functions and molecular mechanisms of miR-221 in human CSCC. At the cellular level, by transfecting cell lines with miR-221 mimics and miR-221 inhibitor, we found that miR-221 regulates the cell proliferation, colony formation and migration of CSCC cells, crucial steps in tumor progression. We further adopted bioinformatics analysis, TargetScan 6.2, to determine how miR-221 acts as an oncogene. Luciferase activity assay indicated direct targeting of PTEN by miR-221. In this study, we found that miR-221 can specifically target PTEN, and suppress PTEN protein expression. PTEN protein is a classic tumor suppressor in various human cancers. The PTEN gene is located on chromosome 10q23.31 [20,21]. PTEN functions as a negative regulator of the PI3K/Akt pathway through dephosphorylation of phosphatidylinositol 3,4,5 trisphosphate. PTEN is involved in regulation of cellular proliferation, apoptosis and metastasis during progression of cancers [22,23]. Akt is a subfamily of the serine/ threonine kinase family. It modulates the function of numerous substrates related to cell proliferation, apoptosis and invasion and is implicated in the progression of several tumors [24]. In our study, we observed that knockdown of miR-221 in CSCC cells leads to a decrease of the expression level of pAkt, and further influences the expression levels of other Akt-regulated proteins, such as Bcl-2, cyclin D1, and MMP2/9.
Conclusions
In summary, our investigation provides effective evidence for the first time that miR-221 expression is upregulated in CSCC tissues and cells. Moreover, miR-221 promotes cell growth by targeting PTEN. These results provide strong evidence that miR-221 is implicated in the initiation and development of CSCC. All of these data hint that miR-221 may provide a potentially important therapeutic target for human CSCC. | 2,856 | 2019-03-08T00:00:00.000 | [
"Biology",
"Medicine"
] |
Iron Doped SBA-15 Mesoporous Silica Studied by Mössbauer Spectroscopy
1 Institute of Computational Intelligence, Czestochowa University of Technology, Aleja Armii Krajowej 36, 42-201 Czestochowa, Poland 2Institute of Physics, Czestochowa University of Technology, Aleja Armii Krajowej 19, 42-201 Czestochowa, Poland 3Institute of Computer Science, Czestochowa University of Technology, Ulica Dobrowskiego 69, 42-201 Czestochowa, Poland 4Rootinnovation Spółka z o.o., Ulica Matejki 11/1a, 50-016 Wrocław, Poland
Introduction
Recently, many research efforts have been focused on design and synthesis of functional materials based on porous silica matrix and transition metal-containing active groups not only due to their attractive architectures but also for potential applications [1][2][3][4][5][6].A wide synthesis activity enabled the synthesis routines control of the mesoporous silica with variable size (2-20 nm) of pores.Additionally, several functional groups grafted either in the pores or in the wall structures were successfully obtained [7][8][9].The synthesis of such samples which would contain the functional groups homogeneously distributed in the host matrices is still the main challenge.These functional groups, if successfully anchored in the silica-based architecture, can offer fine-tuned physical properties (electronic, magnetic, and optical).
The paper concerns the mesoporous silica SBA-15 functionalised by the propyl-iron-phosphonate units.The molar concentration of the propyl-iron-phosphonate units in the silica matrix is 10% (1 silano-propyl-Fe-phosphonate group per 9 SiO 2 groups).Our intention is to report an original approach to probing the synthesis efficiency, in particular, the activation efficiency and homogeneity of the active units distribution.The molecular structure of the species was detected using Mössbauer spectroscopy and confirmed by Raman scattering supported by DFT simulations.As a complementary research we carried out EDX elemental analysis.
Mössbauer spectroscopy is very useful to study structural and physical properties of many inorganic and organic materials, especially iron-containing species.Mössbauer parameters deliver valuable information about the electronic configuration of 57 Fe atoms and their surroundings.On the base of comparative analysis of the Mössbauer spectroscopy carried out for the pure doping agent and target compound it is possible to determine whether the transition of phosphonate units into iron-phosphonate units occurred.Moreover we were able to determine the oxidation state of the iron ion.The electronic state of the iron ion, obtained in the Mössbauer spectroscopy, was confirmed by the EPR spectroscopy.
The Raman scattering measurements combined with numerical models seem to be an adequate approach to check the correctness of molecular configurations in the samples.The assignment of bands in the experimental Raman spectrum is necessary to detect the spectral changes in the hydrogen-bonded active groups.Moreover the EDX elemental analysis helps you find the correct molar proportions between key radicals.
Materials and Methods
Synthesis of the mesoporous silica SBA-15 functionalised by propyl-phosphate-iron units was done by cocondensation of tetraethylorthosilicate (TEOS) and phosphonatepropyltriethoxysilane (PPTES) with presence of surfactant (Pluronic P123).The main part of the procedure was described in detail in our previous work [10,11] with one difference: the iron(II) acetylacetonate (Fe(acac) 2 ) was used as the doping agent.Resulting iron-containing SBA-15 silica compound is called SBA-propyl-POO 2 Fe.
The 57 Fe Mössbauer spectra of powdered samples were recorded at room temperature with a 57 Co source (Rh matrix) in transmission geometry.The velocity was calibrated using a -Fe foil.Mössbauer spectra were analyzed by the leastsquare fitting of Lorentzian lines using the Normos program [12].
For the spin state confirmation of the iron ions we measured our samples with EPR spectrometer.EPR measurements were conducted on a Bruker EMX continuous-wave (CW) X-band (9.5 GHz).The spectra were recorded with microwave power in the range of 20-200 mW using magnetic field modulation of about 5 G. EPR measurements at variable temperatures (10-300 K) were performed by an Oxford Instruments cryostat.
Resonance Raman measurements were carried out at room temperature in the range from 300 to 4000 cm −1 .A Raman spectrometer (Nicolet Almega XR) was equipped with a Nd:YAG laser.The experiments were carried out at 532 nm and the laser was operated at a power level of 40 mW.The spectral resolution of the spectrometer was equal to 1 cm −1 .The same experimental conditions were kept for all the investigated samples.
DFT simulations were carried out in order to find characteristic Raman bands.All theoretical calculations were carried out using the GAUSSIAN 09 package [13] with default convergence criteria applied.The geometry of the model molecule enriched of metal ion was fully optimized at the level of B3LYP [14] with the 6-31G(d,p) basis set, as the most suitable for metal-containing SBA-15 silica model.After geometry optimization, the Raman vibrational modes were calculated using the same method and basis set.The assignment of the calculated Raman bands was done on the basis of PED analysis [15,16] and aided by the animation option of the GaussView 5.0 graphical interface for Gaussian programs [17], which gave a visual representation of the vibrational modes shape.As far as PED analysis is concerned, the calculations were carried out in VEDA software [18].By combining the results of the visualization with potential energy distribution (PED) we obtained very accurate description of the molecules vibrations.The procedure was described in detail in [19].
The EDX quantitive elemental analysis was carried out using the FEI Tecnai G2 20 X-TWIN electron microscope, equipped with emission source LaB6, CCD camera FEI Eagle 2K, and X-Ray microanalyzer EDX.
Results and Discussion
The main problem occurring during synthesis of functionalised SBA-15 mesoporous silica lies in the homogeneous distribution of functional groups.The key stage in the synthesis of this functional material is the functionalisation of precursor groups (propyl-phosphonic acid) by doping agents (Fe(acac) 2 ).The one-step synthesis of SBA-15 silica containing precursor groups is not treated as a critical point of the synthesis.The cocondensation method guarantees the homogeneity of distribution of precursor groups in the silica matrix.Nevertheless we are able to probe the key stage of the synthesis route by using of molecular spectroscopy methods: Mössbauer and Raman.
The Mössbauer spectra were recorded for two kinds of samples: mesoporous silica containing propyl-Fe-phosphate groups and Fe(acac) 2 as the doping agent.In the case of full activation (each precursor group activated by iron) the environment of Fe ions should be different in comparison with the environments in the doping agent, and the differences in the environments should be reflected in Mössbauer spectra.Both spectra (Figure 1) were compared taking into account isomer shifts (IS), quadrupole splitting (QS), line widths (Γ) (Table 1), and the asymmetry of the doublets.
The parameters collected in Table 1 seem not to vary significantly for both samples; however, the detailed analysis reveals some important differences.The IS and QS values can indicate the high-spin Fe 3+ oxidation state [20,21] or low-spin Fe 2+ state [22].The doping agent (Fe(acac) 2 ) was a commercial product purchased from Sigma-Aldrich as a high-purity compound in oxidation state 2+, so the most probable state of iron ions is Fe 2+ , = 0.The EPR research did not show any signal in the temperature range from 10 to 300 K, which can confirm that our sample contains ferrous ions with spin = 0.Both spectra show the relatively high values of the line widths.Such a broadening can arise mainly from relaxation effects or from isomer shift and quadrupole splitting distributions.In this paper we consider the quadrupole splitting distribution, with the assumption of linear dependence between the IS and QS values.The use of the distribution procedure applied in the Normos package which gave the QS distributions is shown in Figure 2.
We can see one intensive broad peak and a few smallintensity peaks for higher values of the quadrupole splitting.The latter small peaks can be connected with the relatively long "wings" in the Mössbauer spectra (Figure 1) and they do not have a physical meaning.The intensive broad peaks in the distributions (Figure 2) indicate that each sample contains iron ions with the same electron configuration (in this case: Fe 2+ , = 0) and continuously distributed quadrupole splittings caused by distribution of the electric field gradient (EFG) around Fe 2+ ions.Local distortions of bond lengths and angles can lead to wide range of local environments of Fe 2+ ions and next to the continuous distribution of quadrupole splittings.Different kinds of asymmetry observed in the spectra (Figures 1(a) and 1(b)) can be connected with varying values of the linear parameters in the relationship IS = (QS) for the mesoporous silica containing propyl-Fephosphate groups and Fe(acac) 2 samples.A slight shoulder observed in the distribution for SBA-propyl-POO 2 Fe spectrum (Figure 2(b)) indicates contribution of additional doublet from the doping agent Fe(acac) 2 .We tried to fit the SBApropyl-POO 2 Fe spectrum (Figure 1(b)) by two doublets and, taking into account the areas under the doublets, we found that contribution of the doping Fe(acac) 2 is negligible (less than 0.1%).It means that practically all propyl-phosphonic acid precursor groups were functionalised by the Fe(acac) 2 agents (two different surroundings of iron ions in samples).
The Raman spectroscopy supported by numerical simulations fully confirms the results presented above.The indepth analysis of this case was available in [19]; therefore we present only a brief overview of the results.Two samples were analyzed under this technique: SBA-15 mesoporous silica containing propyl-phosphonic acid groups (SBA-propyl-PO(OH) 2 ) as the reference and SBA-15 mesoporous silica containing propyl-iron-phosphate groups (SBA-propyl-POO 2 Fe).Both samples were obtained during the same synthesis route, but in other steps (SBA-15 containing propylphosphonic acid groups are a precursor of a SBA-15 containing propyl-Fe-phosphate groups).The DFT simulations for models of considered molecules enabled the precise identification of the vibrations origins.Characteristic Raman frequencies were found by comparison of theoretically calculated spectra (DFT method) and selecting peaks that are different for both samples.We expect that, for successful functionalisation, each propyl-phosphonic acid group will be transferred into propyl-iron-phosphate group.As a result we should find in the spectrum of SBA-propyl-PO(OH) 2 features originated from propyl-phosphonic acid groups, which are not present in the spectrum of SBA-prop-POO 2 Fe.And, vice versa, in the spectrum of SBA-propyl-POO 2 Fe we should find peaks created by the propyl-iron-phosphate groups vibration, which should be invisible for the SBA-propyl-PO(OH) 2 spectrum.The juxtaposition of the theoretical Raman spectra with the experimental ones for considered samples can be seen in Figure 3(a) (theoretically calculated) and in Figure 3(b) (experimental).
The most significant theoretical vibrations for SBApropyl-PO(OH) 2 are located within 3320-3380 cm −1 region while SBA-propyl-POO 2 Fe displays peak at 670 cm −1 .The first one comes from the O-H stretching modes of the phosphonic acid groups in SBA-propyl-PO(OH) 2 .The ironcontaining species vibration at 667 cm −1 can be assigned to deformation of the propyl-iron-phosphate unit.The analysis of these regions in experimental spectra can give information about the synthesis efficiency.
Indeed, in the experimental spectra of the ironcontaining specimen we can observe distinguishable peaks at 674 cm −1 coming from complex vibration of the propyl-Fe-phosphate group.For SBA-propyl-PO(OH) 2 the region within 640-1300 cm −1 seems to be flat.This indicates that iron is joined through the propyl-phosphate groups in the silica matrix.
In the case of SBA-propyl-PO(OH) 2 the well-resolved peak can be observed at about 3350 cm −1 .These vibrations can be assigned to stretching modes of the phosphonic acid groups, particularly, asymmetrical stretching of hydroxy units.The region above 3100 cm −1 did not contain any Raman bands in the case of the SBA-propyl-POO 2 Fe, which can prove the absence of the phosphonic acid groups in this sample.This is the result of the complete functionalisation; phosphonic acid groups are not present in the sample.
The quantitive EDX analysis was carried out for ironcontaining silica sample as a supplementary research, in order to confirm our previous results.The EDX spectrum is shown in Figure 4 and quantification results are presented in Table 2.
It is clearly seen that the SBA-prop-POO 2 Fe sample has almost assumed quantity of iron; the molar ratio of silicon to phosphorous and silicon to iron is equal to 10.707 and 11.815, respectively.This lead to a conclusion that almost each phosphonic acid group was activated by iron.Minor deficiency of iron is caused by propyl-phosphonic acid groups incorporated in silica walls.
Conclusion
The Mössbauer spectroscopy was applied to examine the functionalisation efficiency of the iron-containing mesoporous silica SBA-15.This SBA-15 containing propyl-ironphosphate was investigated in comparison with Fe(acac) 2 as the doping agent.Comparative analysis of the spectra and obtained parameters has shown that for both investigated samples only one electron configuration of ferrous ions can be observed (there is no excess of the doping agent inside iron-containing mesoporous samples).Mössbauer parameters indicate low-spin Fe 2+ ( = 0) for both samples.The low-spin state of ferrous ions was also confirmed in the EPR spectroscopy.The abovementioned results show that the activation process runs according to synthesis assumptions and phosphonic acid groups are activated to iron-phosphate.The success of the activation process was also confirmed by Raman scattering supported by numerical simulations and EDX elemental analysis.
Figure 3 :
Figure 3: The juxtaposition of the Raman spectra for silica containing propyl-phosphonic acid and propyl-iron-phosphonate units, theoretically calculated (a) and experimental (b).Main characteristic bands (bands appearing in only one species) are marked as coloured bands.
Figure 4 :
Figure 4: The EDX spectra of SBA-prop-POO 2 Fe sample.For better visibility, region between 2 and 6 keV (no peak observed) has been removed.
Table 1 :
Mössbauer parameters (in mm/s) derived from experimental spectra of the complexes studied. | 3,050.6 | 2016-03-01T00:00:00.000 | [
"Materials Science"
] |
A 3D-2D Convolutional Neural Network and Transfer Learning for Hyperspectral Image Classification
As one of the fast evolution of remote sensing and spectral imagery techniques, hyperspectral image (HSI) classification has attracted considerable attention in various fields, including land survey, resource monitoring, and among others. Nonetheless, due to a lack of distinctiveness in the hyperspectral pixels of separate classes, there is a recurrent inseparability obstacle in the primary space. Additionally, an open challenge stems from examining efficient techniques that can speedily classify and interpret the spectral-spatial data bands within a more precise computational time. Hence, in this work, we propose a 3D-2D convolutional neural network and transfer learning model where the early layers of the model exploit 3D convolutions to modeling spectral-spatial information. On top of it are 2D convolutional layers to handle semantic abstraction mainly. Toward simplicity and a highly modularized network for image classification, we leverage the ResNeXt-50 block for our model. Furthermore, improving the separability among classes and balance of the interclass and intraclass criteria, we engaged principal component analysis (PCA) for the best orthogonal vectors for representing information from HSIs before feeding to the network. The experimental result shows that our model can efficiently improve the hyperspectral imagery classification, including an instantaneous representation of the spectral-spatial information. Our model evaluation on five publicly available hyperspectral datasets, Indian Pines (IP), Pavia University Scene (PU), Salinas Scene (SA), Botswana (BS), and Kennedy Space Center (KSC), was performed with a high classification accuracy of 99.85%, 99.98%, 100%, 99.82%, and 99.71%, respectively. Quantitative results demonstrated that it outperformed several state-of-the-arts (SOTA), deep neural network-based approaches, and standard classifiers. Thus, it has provided more insight into hyperspectral image classification.
Introduction
Hyperspectral images (HSIs) have hundreds of spectral bands that comprise detailed spectral information. As a result, HSI images have formed the foundation for a wide range of applications, including precision agriculture [1], resource surveys [2], target identification [3], and landscape classification [4]. Because visual classification can aid in interpreting HSI image scenes, classification is an essential domain in HSI image processing [5,6]. However, high dimensionality, high nonlinearity, and an imbalance between the limited training samples of HSIs [7,8] affect classification accuracy and make HSI classification difficult.
To address the abovementioned challenges, dimensionality reduction (DR) [9][10][11][12] and semisupervised classification [13,14] approaches have been extensively adopted for HSIs. Generally, there are two classes of DR, i.e., the band selection and feature extraction [15]. Among them, feature extraction [16][17][18][19] minimizes computational complexity by projecting high-dimensional data into low-dimensional data space and feature selection [20] picks appropriate bands from the original set of spectral bands. Further, a sparsebased method [21] has been used to derive useful spectral features. Nevertheless, PCA seeks out the best orthogonal vectors for representing information from HSIs [22,23] with minimized spectral dimension (up to 85%). On the contrary, it improves the separability among classes, decreases, and brings a balance of the interclass and intraclass. erefore, we used PCA as an effective tool to transform the original features into a new space with reduced dimensionality and more excellent distinctive features.
Lately, a more innumerable center has been directed to the remote sensing (RS) study scope for HSI classification. However, the high-resolution features of HSI data make it challenging to understand and separate several land-cover classes, extract more major distinctive structures, and produce an unbiased HSI classification through the application of traditional machine learning (ML) approaches [24]. Nonetheless, the evolution of deep learning (DL) has exceptionally improved not only in RS but also in different research areas such as digital image processing (DIP), pattern recognition, segmentation, data classification, and object detection [25]. e tremendous progress in DL to analyze HSI [26] by many research works in the past years has somewhat solved the HSI classification problem through a proposed dual-path network (DPN). It combined two systems, specifically the dense-convolutional network and the residual network [27]. It engages an unsupervised greedy layer-wise training approach to interpret the RS images [28] for a pixel-block pair (PBP) exhibition. To find a solution for HSI classification, Song et al. [29] came up with a deep feature fusion network while Cheng et al. [30] adopted the off-the-shelf convolutional neural network (CNN) techniques. Li et al. [31] employed 3D-CNN, deep feature extraction for HSI classification. Mou et al. [32] considered an unsupervised model referred to as a deep residual conv-deconv network to resolve the HSI classification problem.
However, the rarity of identifying the HSI pixels of the separable classes is a repeated integrated obstacle in the original space. It is patent from this past research that singularly employing 2D-CNN or 3D-CNN has limitations, for instance, squandered band-related information or deeply intricate method. Additionally, it prevents the methods mentioned above from achieving outstanding accuracy.
e principal explanation is that HSI is volumetric data with spectral dimension. Using the 2D-CNN method alone cannot acquire helpful, distinctive feature maps from the spectral interpretations. Likewise, a deep 3D-CNN method is computationally costly. It performs poorly for classes of similar features over several spectral channels when used alone. In addition, the methods take more computational time to analyze and interpret the spectral-spatial data cubes. erefore, we proposed a 3D-2D convolutional neural network and transfer learning model embedded in ResNeXt-50 with consecutive feature learning blocks based on the challenges mentioned above. Our approach takes the spectral-spatial features of HSI into account for classification. It achieves a brief description of the spectral-spatial data and enhanced computational efficiency as defined: We propose a 3D-2D convolutional neural network and transfer learning model that utilizes 3D convolutions to modeling spectral-spatial information in the early network layers of the model and the 2D convolutions on top to exceptionally deal with semantic abstraction. e network leverage convolutional blocks of the ResNeXt-50 model before the flatten layer to further enhance the performance. We applied regularization techniques to avoid overfitting during fine-tuning. We engaged an optimizer with a prolonged learning rate with a dropout of 0.5/ 0.055 and early stopping in the training process. Adam is a good choice for the process as opposed to methods such as stochastic gradient descent (SGD). We evaluated our proposed model on five sets of publicly available HSI data. Our proposed model delivers swift spectral-spatial representation, enhances computational efficiency, and validates more understanding of the 3D spectral-spatial hyperspectral imagery classification. e rest of our paper is organized as follows; Section 2 gives the related works on HSI classification. en, Section 3 describes the proposed approach in detail. Section 4 presents extensive experiment; finally, the conclusion is presented in Section 5.
Related Work
Recently, CNNs have been implemented by a manifold of researchers; for example, Zhang et al. [33,34] implemented a CNN model for the HSI classification. e work acquired the spatial features through a 2D-CNN approach by utilizing the original HSI image's first insufficient principal component channels. Using 2D-CNN in HSI comes with various advantages: a principled way to acquire features instantly from the original input images. It has shown tremendous promise in image processing and computer vision, with applications such as object detection [35] and image classification [36]. Nonetheless, the immediate deployment of 2D-CNN to HSI images necessitates the convolution of individual inputs of the 2D networks, in addition to each group of learnable kernels. Frequently, a substantial amount of bands with the spectral dimension of the HSI image requires a vital number of parameters, which may be subject to overfitting and a risen computational cost.
Preceding articles acknowledge that 2D-CNN has achieved incredible outcomes in visual data processing areas such as image classification [37], face detection [38], depth estimation [35,39], and object detection [40]. Nevertheless, using 2D-CNN in the investigation of HSI points to the failure to catch channel-related information. Accordingly, using 2D-CNN entirely has no capacity for extracting valuable features of the spectral dimension. In addition, the 2D-CNNs, when deployed alone, hinder them from achieving more reliable accuracy on HSI.
An enhanced spatial dimension of HSIs helps supply multiple low-level features, combining exhaustive spatial information. In contrast, the spectral features present [43] implemented a 3D deep learning framework for spectralspatial features classification. To extract the spatial-spectral features undeviatingly from the original HSI image, Mei et al. [44], introduced a 3D CNNs approach that exhibited boosting classification outcomes. Li et al. [45] extended their investigations of 3D-CNN to classify spectral-spatial with the use of 3D input cubes with small spatial dimensions. eir techniques produce thematic classification maps employing an approach that can process original HSIs directly. However, the CNN method drops in precision as the network deepens.
Li et al. [46] further explain that HSI imagery combines several adjacent bands or channels with affluence of spectral signatures, hence, the distinguishing of different elements through discrete spectral discrepancies. However, these spectral bands are closely correlated and incorporate considerable redundant information due to a huge volume of the raw spectral bands and the spatial resolution, henceforward, the difficulty in discriminating the landcover classes [47]. Additionally, the key enigma entails extracting the discriminative features of the HSI data to reduce the set of important bands [48]. In a different outline, the HSI data generally takes a 3D cube form. e 3D convolution in spectral-spatial dimensions frequently contributes towards an effective approach that empowers a concurrent extraction of the detailed features in such images. Studying the information, numerous authors have implemented a 3D-CNN method to purposely extract the deep spectral-spatial [18,30,36,42,43,45,49,50]. Works by Song et al. [29], Mou et al. [32], Zhong et al. [43], and Paoletti et al. [51] exhibited extensive network residual learning (RL) models to extract additional discriminative characteristics for HSI classification. More advanced investigations on HSI classification point to significant enhancement by fusing spatial features toward classifiers [52]. Although the 3D-CNN architectures are manageable and can deduce the spectral and spatial information from HSI data while accomplishing more reliable accuracy, they are computationally expensive to be uniquely employed in HSI analysis. On the contrary, when deployed alone, it hinders them from achieving more reliable accuracy on HSIs. It is essential to merge the learned spatial features with the spectral features captured by feature extraction methods for reliable HSI classification.
Melgani and Bruzzone [53] introduced a support vector machine (SVM) technique with diverse classifiers to evaluate their potentials. Makantasis et al. [19] proposed deep learning that envisions high-level features automatically in a hierarchical order to encode spatial information and pixels' spectral for classification. ey engaged a 3D DL method that facilitated spectral and spatial information and then induced a basis for solving RS data noise.
e method subsequently classified the information employing a multilayer perceptron. However, the method only considered spatial features for HSI classification. A multiscale 3D deep CNN (M3D-DCNN) of 5 layers is proposed for similar work [54]. e model concurrently learns 2D multiscale spatial features and 1D spectral features from HSI data in an end-to-end approach. us, it jointly extracts both the multiscale spatial feature and the spectral feature. Moreover, the model lacks features aggregation, which affected classification performance.
Zhong et al. built a spectral-spatial residual network (SSRN) model that manipulates the 3D raw data cubes for HSI classification [43]. It uses identity mapping to concatenate 3D convolutional layers via residual blocks for backpropagated gradients. Using hybrid spectral CNN (HybridSN), Roy et al. [55] achieved a better classification accuracy. e model combines the corresponding spectral and spatio-spectral data in the 3D and 2D convolution forms, respectively. Although the model achieved high Computational Intelligence and Neuroscience accuracy, it maintains many parameters likened to the SSRN model; simultaneously, it takes a long to train. In this context, our system shares the same skeleton system architecture as Roy et al. [55], except for the convolved 2D input kernels. Instead of a single 2D layer, we leverage five (5) convolutional blocks of the ResNeXt-50 model starting from the layer block with filter 128 before the flatten layer to handle semantic abstraction. We freeze the layers from the 3rd block before training. is practice strongly discriminates the spatial information within different spectral bands without substantial loss of spectral information. e experimental result shows that the approach improves the computational efficiency, classification accuracy, and instantaneous representation of the spectral-spatial information compared to SOTA methods such as SVM [53], 2D-CNN [19], 3D-CNN [42], M3D-CNN [54], SSRN [43], and HybridSN [55] that have deployed the hyperspectral remote sensing images as the experimental datasets.
A 3D-2D Convolutional Neural Network and Transfer
Learning Model. Figure 1 illustrates the general diagram of our proposed method for hyperspectral image classification. e proposed 3D-2D convolutional neural network and transfer learning model (3D-2D-CNNTL) model mimics the design architecture of HybridSN but differs in implementation. It fuses both 3D and 2D-CNN layers to obtain the spectral features encoded in a manifold of bands with spatial information. e 3D-CNN learns an abstract level spectral-spatial representation and the 2D-CNN network for spatial feature learning. We then leverage convolutional blocks of the ResNeXt-50 model before layer flatten. ResNeXt-50 blocks are deep residual networks with cardinality that utilizes the split-transform-merge method. Results are seen in branching paths within a cell to transform the residual block. e output from the ResNeXt-50 block concatenated with the skip connection path resulting in an orthogonal increase in the depth of the residual networks [56]. e ResNeXt-50 block is represented as where y is the output, x represents the input of the preceding network layer, C denotes the cardinality, and τ i is the (1) Input: Computational Intelligence and Neuroscience arbitrary function that projects x into low-dimensional embedding and transforming it. e proposed model network concatenated with ResNeXt-50 as the base model is shown in Figure 2. (2), we took the input image as the spectral-spatial hyperspectral data cube represented by
Hyperspectral Input Image. As shown in equation
where I denotes the HSI input image, W denotes the width, while H denotes the height, and N signifies the value of spectral bands. Each spectral-spatial image pixel in I consist of N spectral measures which formulate to a label vector expressed as where L in this space represents the land-cover categories.
Dimensionality Reduction.
PCA is an unsupervised feature technique for feature extraction used to derive orthogonal features from a dataset and decrease the feature space's dimensionality. We applied PCA for dimensionality reduction at the first I, beside the spectral channels, to eliminate spectral redundancy and dataset Computational Intelligence and Neuroscience 5 imbalance. is redundancy is caused by high intraclass variability and interclass similarity due to different landcover classes represented by the spectral-spatial HSI pixel.
To identify the object in its original class, the PCA helps to decrease spectral bands, i.e., from N to S but conserved W and height H at the exact spatial dimensions, as shown in the equation below: where P denotes the transformed HSI input after applying PCA. We then divided the spectral-spatial data cubes into small overlapping 3D patches Q ∈ R S×S×N from P, where S × S represents the width and height of the covering window size. Finally, the central pixel of the class label at the spatial location (α, β) decides the truth labels. e 3D patches (n) from S takes expression e 3D patch at the position (α, β), represented by Q (α,β) , thus represents the width from (α − (S − 1)/2) to (α + (S − 1)/2), height with the entire N spectral bands of PCA decomposed data cubes P. Figure 3 delineates the process of dimensionality reduction.
ere are four primary steps in PCA as the pseudocode for each computing step is supplied in Algorithm 1. e data volume is first relocated to a new location to be recentered around the reference origin region. e mean value of each spectral band is computed and removed during data preprocessing (see step 2 of Algorithm 1). Second, the data volume's covariance matrix is calculated as the product of the preprocessed data matrix and its transpose (step 3). e related eigenvectors of the covariance matrix are then retrieved (step 4). Each pixel of the original image is projected into a subset of eigenvectors (steps 5 and 6), which produce a reduced dimensionality.
We can get a reduced dataset from the original highdimensional dataset by following these steps, which is the primary goal of the PCA technique. Finally, the explained variance ratio given by a principal component is the balance between the variance of that principal component and the total variance. e explained variance ratio was nearly 75% for the five dataset samples.
e Spectral-Spatial Feature Learning.
To generate the feature maps of the convolution layer from the spectral-spatial features and capture the spectral information, we applied the 3D kernel over a manifold of adjacent HSI channels in the input layer in our suggested model for the HSI dataset. e 3D convolution network at a spatial point (x, y, z), which denotes the activation value at the j th feature map of the i th network layer of the proposed model, is designated as v x,y,z i,j and produced through the following expression: where ϕ represents the activation function, the bias constraint is denoted by b i,j , d l− 1 signifies the value of feature map in l − 1 th network layer, 2c + 1 represents kernel's width, 2δ + 1 is the height of kernel, the depth of the kernel is represented by 2η + 1 along the spectral dimension, and w i,j represents the number of weight constraint of i th network layer for the j th feature map. We applied a supervised approach [36] to train the constraints of bias represented by (b) and the kernel weight represented by (w) through gradient descent. Eventually, a spectral-spatial feature representation is taken concurrently from the HSI by the 3-D-CNN kernel, whereby the computational expense remains complex. To achieve the convolution of the network, we estimated the summation of products of the two corresponding dot products. ese products are the HSI input and the kernel spatial dimensions. Lastly, we include the entire feature maps of the last network layer of the model. e activation function value in 2D convolution at (x, y) denotes the spatial point of the i th network layer for the j th feature map represented by v x,y i,j and generated using the in-text equation: where ϕ in the equation represents the activation function, b i,j denotes bias constraint, d j− 1 signifies the value of feature map in l − 1 th network layer, and w i,j represents the width of the kernel all designed for the i th network layers for the j th feature maps. A 3D convolution is produced via concatenating a 3D kernel with 3D data. Roy et al. [55] employed a 3D kernel over a manifold of adjoining bands and channels in the input layer to obtain the spectral features to generate a feature map layer. We employed similar 3D for the first three layers in Computational Intelligence and Neuroscience our model. Triple 3D convolutions (Conv3D) are applied to preserve the spectral features for the input data. is helps the amount of spectral-spatial (SS) feature maps to increase within the output dimensions simultaneously. We engaged 3D convolutional blocks with filters; 8, 16, and 32 in the first, second, and third convolution layers. e Conv3D and Max-Pooling kernel size is z × z × h, that is, z � kernel spatial size and h � the kernel depth. Conv_layer1 . e output layer is then reshaped to take a 2D form, i.e., the 4th and 5th 2D convolution (Conv2D) and max-pooling kernel size of z × z and stride � 2. We leveraged five convolutional blocks of the ResNeXt-50 model starting from the layer block with filter 128 before the flatten layer, where we freeze the layers from the third block before training. is practice actively discriminates the spatial information within distinct spectral channels without losing any important spectral information.
e ResNeXt-50 block (bottleneck layer) further learns deep spatial encoded features when transforming from 3D to 2D before the FCs' layers to significantly condense the input feature maps and accelerate the training speed. en, the output is downsized (flattened) before assigning it into the FC layers that produce the land-cover class possibilities via a softmax loss layer l 0 expressed as where j represents the number of class labels, p represents the mini-batch size, and q i and r i represent the i th label probability distribution vector and the ground truth (GT) label in the mini-batch, respectively. e average is computed on the sum result from the whole mini-batch pixels. e weights were not significantly changed during the fine-tuning stage, as the ResNeXt-50 model is already good. We employed the Adam optimizer with a learning rate of 0.001 and a weight decay of 1e − 06. Usually, the Adam is appropriate for this instead of the SGD optimizer. Whenever the number of training samples is small, it occasionally triggers overfitting. Hence, we adopted early stopping with dropout regularization techniques to combat overfitting and improve generalization error. We used a dropout of 0.50 for IP, PU, SA, and KSC datasets and 0.55 for BS due to the sampled size. We considered the early stopping criterion to quickly stop the training whenever the performance on the validation set detriments and ensures convergence. erefore, this pattern is factored during the training process to minimize the computation complexity without detrimental classification accuracy. We run each experiment for 100 epochs after estimating the number of components to 75. e batch sizes were set as 25 Table 1 for a summary of all layer types, output map dimensions, and the number of parameters used in our proposed model for each dataset.
To solve the quicker convergence of the model, we adopted the ReLUs' activation function. It tends to be faster training convergence than other saturating activation functions. e ReLU also enhances the model's effectiveness to represent complex functions and facilitates optimization, yielding lower training and testing losses and is formulated as 3.5. Evaluation Indexes. We use three evaluation metrics, overall accuracy (OA), Kappa coefficient (Kappa), and average accuracy (AA), to estimate the model performance. e OA and AA metrics describe the average exactness of is helps confirm the precise number of samples correctly classified from the test set. e Kappa coefficient is used as a numerical determination metric to reciprocate information. It helps verify a resilient concurrence based on the ground truth and the classification mapping. See equations (10)-(12) for evaluation indexes.
Kappa Coefficient (K)
where P 0 � P ii is the summation of the relative frequency in the diagonal of the actual error and P c � P i+ P +j is the relative frequency of random allocation equivalent to the Computational Intelligence and Neuroscience 13 chance of agreement. ("i+" and "+j") represents the relative marginal frequencies.
e Overall Accuracy (OA)
OA � CC T , (11) where CC represents accurately predicted samples in relation to the ground truth. T is all samples of either the ground truth or predicted values.
e Average Accuracy (AA).
e average accuracy of our model performance is given by where c is the number of classes and x indicates the percentage of correctly classified pixels in a single class.
Data Preprocessing.
We processed different publicly available remote sensing datasets [57] to determine the performance of our proposed model. e dataset includes Indian Pines (IP), Pavia University Scene (PU), Salinas Scene (SA), Kennedy Space Center (KSC), and Botswana (BS). Table 2 summarized the description of each dataset used. Training Accuracy 14 Computational Intelligence and Neuroscience We split the labeled samples randomly into 30% and 10% training set size and 70% and 90% as a test to conduct our experiments, ensuring the inclusion of all classes. Also, we conducted statistical normalization of all the data to zeros and ones-mean (μ � 0) and unit as the variance (σ � 1). To measure the volatility of the model, we expressed the classification accuracies using mean ( ± ) standard deviationbased statistics.
We carried a set of experiments to present the effectiveness and superiority of our model. We compared our results with the SOTA, methods such as SVM [53], 2D-CNN [19], 3D-CNN [42], M3D-CNN [54], SSRN [43], and HybridSN [55]. e model obtained a very satisfying performance classification accuracy as compared to the cited methods. In our first experiment, we used 30% of the training samples to determine the best parameters of our model. e results outlined in Tables 3-5 highlight the best classification accuracy for individual classes using catego-rical_crossentropy as a loss function.
Per-Class Accuracy on the Indian Pines (IP) Dataset.
As we can see from Table 3, our proposed model's performance gives the highest score in 10 out of 16 classes on the IP dataset comparing to the methods listed. Figure 4(a) illustrates the false-color map, Figure 4 little higher percentage superior to SSRN and HybridSN methods. Our model has a smooth and accurate classification compared to other SOTA models. See red "+" on the class labels such as alfalfa, corn-no till, corn, grass-pasture, grass-trees, grass-pasture-mowed, soybean-min till, soybean-clean, wheat, buildings-grass-trees-drives, and stonesteel-towers. Figure 5 shows our proposed model's accuracy and loss convergence with 100 epochs on a 30% train set of the IP dataset.
Per-Class Accuracy on Pavia University Scene (PU)
Dataset. Table 4 presents the classification results for the PU dataset. In terms of class accuracy, the class "Shadows" happens to be the most challenging to be correctly classified.
Our model still exhibits the best accuracy for this class. Figure 6(a) portrays the false-color map, Figure 6(b) the reference ground truth map, and Figures 6(c)-6(i) are classification maps for the PU dataset employing SVM, 2D-CNN, 3D-CNN, M3D-CNN, SSRN, HybridSN, and our proposed model, respectively. Although the quality of the classification map of SSRN, HybridSN is better, and our model comparatively has a small percentage increment superior to SSRN and HybridSN methods. Our model has a precise and accurate classification compared to other methods with red "+" on the trees, bare soil, and selfblocking bricks class labels. Also, see Figure 7 for the accuracy and loss convergence of our proposed model for 100 epochs on the PU dataset, demonstrating computational effectiveness with significant convergence at approximately 30 epochs.
Per-Class Accuracy on the Salinas Scene (SA) Dataset.
e classification accuracy for the SA dataset is shown in Table 5. We trained the model by adopting the Adam optimizer and maintaining a learning rate of 0.001 and 0.50 dropout. It outperforms all other methods, and it has the same performance as HybridSN, however, better in computational efficiency. Figure 8(a) portrays the false-color map, Figure 8(b) the reference ground truth map, and Figures 8(c)-8(i) are classification maps for the SA dataset using SVM, 2D-CNN, 3D-CNN, M3D-CNN, SSRN, HybridSN, and our proposed model, respectively. e quality of the classification map is still comparatively better with our model, with a significant percentage surpassing the SSRN and HybridSN models. Also, our model has a distinct and correct classification with no ambiguity in the class label. Other SOTA methods with red "+" on the class labels depict misclassification. ese labels are Fallow_rough_plow, Corn_senesced_-green_weeds, and Vinyard_untrained. Figure 9 gives the accuracy and loss convergence of the train set on the SA dataset with 100 epochs of our proposed model. e model converges at approximately 40 epochs, confirming that our model delivers high computation efficiency using 30% of the train set.
With 30% train data, we can conclude that our model outperformed other SOTA models. Notably, we compared our model with the HybridSN [46] method using 30% of the available labeled samples in the KSC and BS datasets as the training set. e BS dataset requires further study on the application of HSI models as it is characterized by low spatial resolution multispectral satellite images. Table 7 shows the per-class accuracy achieved on 30% of the training set on the KSC dataset. e bold points emphasize the best of our model compared to the HybridSN model.
As shown in Figure 10, our model's training accuracy and loss convergence after 100 epochs engaging 30% of the BS data as a training set. e model converges at almost 50 epochs, verifying quick feature learning of our model. Table 8 presents the overall accuracy performance regarding OA, Kappa, and AA for classic classifiers and deep neural network models. Our model achieves competing accuracy across the three datasets (IP, PU, and SA), maintaining a minimum standard deviation across all the experiments consecutively. is is due to a sequential representation of spectral-spatial 3D-CNN and a spatial 2D-CNN, succeeded by ResNeXt-50 for feature extraction.
From Table 8, our model outperforms SVM in terms of OA, Kappa, and AA with 14.55, 16.73, and 20.73 percentage points, respectively, on the IP dataset. Additionally, it yielded better classification results than the 2D-CNN, 3D-CNN, M3D-CNN, SSRN, and HybridSN with an OA, Kappa, and AA accuracies of 99.85%, 99.83%, and 99.76%, respectively. Figures 11(a)-11(c) sequentially represent an absolute confusion matrix highlighting the proposed model's performance on 30% training samples of the IP, UP, and SA datasets. We recognize that relatively great diagonal values with different colors are situated across the central diagonal of the entire matrices. is signifies that our model significantly decreases the misclassifications of class labels, with many of the classes precisely predicted, producing a more related map regarding the ground truth. Table 9 demonstrates the results of our proposed model with various SOTA methods on IP, PU, and SA with 10% of the training set. Our model achieves higher classification accuracy in all considered HSI scenes. e overall accuracy (OA), respectively, mounted to 98.78%, 99.80%, and 99.99% on IP, PU, and SA datasets. Hence, proving our proposed model is somewhat better to the SOTA methods in nearly all states, while maintaining the least standard deviation. Figures 12-14 emphasizes the training accuracy and loss for our proposed model, and Figure 15 illustrates the confusion matrix of the three datasets, i.e., IP, PU, and SA. Table 10 presents the execution time on the IP, PU, and SA datasets with spectral-spatial SOTA methods. e execution time is based on the GPU computational training time (m) and testing time (s). We can conclude that our model outperforms the other spectral-spatial models in training and test time. is is due to early stopping, accuracy monitoring, and adopted regularization technique during the training process that helps minimize computational complexity, while steadily maintaining classification performance.
We ran this on MacBook Pro (Retina, macOS Catalina, and processor: 2.3 GHz Quad-Core Intel Core i7, 8 GB 1600 MHz DDR3-NVIDIA GeForce GT 650M (Memory), and Software: Python and Google Colaboratory ltd., with 1 GPU acceleration mode and 25.7 GB RAM.
Conclusion
is work extends the HybridSN model by proposing a 3D-2D convolutional neural network and transfer learning model for the HSI classification. We introduced a bottleneck layer (ResNeXt-50) in our model to drastically decrease the number of parameters. is helps minimize the computational time than the HybridSN model, while steadily maintaining classification performance. To combat overfitting, we employ early stopping with dropout regularization techniques. e advantage of our 3D-2D convolutional neural network and transfer learning model is the ability to perform highly in a spectral-spatial way. Experiments with five diverse HSI datasets prove that our proposed model did exceptionally well and showed effectiveness. It outperforms the SOTA approaches; hence, it confirms more understanding of the 3D spectral-spatial HSI classification. However, we only trained a few datasets on our model. We recommend future works to consider additional datasets for training and testing our model and implementing them to deep learning methods in HSI classification.
Data Availability e data that support the findings of this study are openly available in Hyperspectral Remote Sensing Scenes at http://www.ehu.eus/ccwintco/index.php/ Hyperspectral_Remote_Sensing_Scenes.
Conflicts of Interest
e authors declare that they have no conflicts of interest. | 7,368.2 | 2021-08-21T00:00:00.000 | [
"Environmental Science",
"Computer Science"
] |
Comparison of conventional varnishes with nanolacke UV varnish with respect to hardness and adhesion durability.
The long-term durability of varnished wooden surfaces used in either indoors or outdoors environments depends on the resistance of varnish layers on these surfaces against potential physical, mechanical and chemical effects to which they may be exposed. In this study, "Nanolacke ultraviolet varnish", developed by a Turkish dying and varnish industry company and widely accepted as a 21(st) century technology has been compared to other conventional varnish systems widely used in the industry in terms of dry film resistance properties. In this study, cellulosic, polyurethane, polyester, synthetic and Nanolacke ultraviolet varnish have been applied on beech (Fagus orientalis L.) and oak (Quercus robur L.) wood samples which had been prepared according to the industry standards. Then, the hardness and adhesion resistance of these layers have been determined according to ASTM D 4366 and ASTM D 3359-2 standards, respectively.
Introduction
Wood is a natural continuously renewable material, which, although nowadays there are many alternatives for it, over the centuries has never lost importance due to its superior properties. Among the various surface processes applied to wood, varnishes are used to show its beauty, colour and the wood pattern resulting from its nature along a film layer. After the application of surface treatments performed according to industry standards the technical, aesthetical and economical value of wood increases. Varnishes and varnish systems used in wood surfaces have varied and developed over time, as a result of quality demands and environmental protection consciousness. The newest area of development in this field corresponds to the nanotechnology product varnishes discussed herein.
Compano and Hullman [1] indicated that nanotechnology would become one of the most important technologies of this century. In the next ten years, this technology might generate profits of several hundred billion Euros. Governmental institutes and research companies from all over the world are competing to establish their positions in this new technology.
It is expected that nanotechnology based varnishes will replace with conventional varnishes thanks to their outstanding properties stated briefly below: • Limited or zero solvent pollution.
• Better product quality.
• Lower capital costs compared to thermal curing.
• Lower maintenance costs.
• Excellent process control.
• After curing the wood is immediately ready for other processing steps (cutting, joining with other materials, etc.). Nanotechnology gives researchers the opprtunity to change the structure of the materials on the molecular scale. For example, different gas and water permeability levels can be obtained. Addition of nano-molecules to the products can provide resistance to light and flame, mechanical strength, better thermal performance and resistance to gases [2].
The aim of this study is to compare the hardness and adhesion resistance properties of newly developed nanolacke UV varnishes, which have been developed for wooden furniture and parquet surfaces, to conventional varnish systems. Furthermore, by using different tree species, it will be determined whether the use of different species of trees has any effect on hardness and adhesion resistance.
Literature review
According to Wood et al. [3], nanotechnology will produce economic and social impacts on three broad timescales. Current applications of nanotechnology are largely the result of incremental advances in already well-established branches of applied science, such as material science and colloid technology. Medium term applications of nanotechnology will apply principles only now being established in the laboratory to overcome foreseeable barriers to continued technological progress. In the long term, entirely new applications may emerge from developments that are currently only anticipated in the laboratory.
For paint and varnish layers, hardness is an important indicator of resistance to physical and mechanical effects. Pendulum damping test apparatus can be used for measuring the hardness of paints and varnishes [4].
Uysal et al. studied the effects of chemicals used for the bleaching of the wood surfaces on the layer hardness of varnishes. They indicated that in the natural varnishing process the effects of the wood species on the layer hardness of varnish are unimportant, but the effects of varnish types are important. In the varnishing process, after bleaching the different wood types, bleaching chemicals and their concentration and varnish kinds affected the hardness of the varnish layer [5].
Ors and Atar reported that the hardness of varnish layers was not affected by impregnation and bleaching materials, but the hardness of wooden materials was increased by impregnation materials. Solvent groups however, decreased the hardness. It was concluded that synthetic varnishes were found suitable for use after bleaching and impregnation processes [6].
Steinfeldt et al. carried out a case study on the ecological efficiency of Nano-Varnish. In this study the practical investigation of ecological effects was implemented by comparing a nano-based varnish to a water-based varnish, a solvent-based varnish and a powder varnish. According to their results on varnishing an aluminium car surface they claimed that the advantages of using nano-varnishes are most visible in the VOC-emission levels. The VOC-emissions of the nano-varnish are 65 per cent lower than those of other varnishes [7].
According to Ors et al., with regards to surface hardness borax and acrylic impregnation of oak was superior to oak impregnated with borax and synthetic varnish. Therefore, it was concluded that boron compounds increase the surface hardness of the varnished wood [8].
Budakci and Atar studied the effect of outdoor conditions on the hardness characteristics of bleached Scotch Pine wood. They indicated that better results were obtained with bleaching process, and this method can be used for the restoration of wooden materials exposed to outdoor conditions [9].
Wood materials
Beech and oak were selected as the wooden materials for our experiments. Two significant factors were taken into consideration when choosing these species. The first was that these species are widely used in the furniture and parquet sectors, where most varnishes are consumed in Turkey. The second one was that they represent different anatomical structures. Beech was chosen to represent diffuse porous trees and oak was chosen to represent ring porous trees.
Oriental beech (Fagus orientalis Lipsky) and oak (Quercus petreae Lipsky) were randomly obtained from timber merchants in Istanbul, Turkey. Special emphasis was given to the selection of the wood material (lumber). Accordingly, no-defects, suitable, knotless, normally grown wood materials (without zone lines, without reaction wood and without decay, insect, or fungal infections) were selected, according to the TS 2470 standard [10].
Varnishes
Cellulosic, synthetic, polyurethane, polyester and nanolacke UV varnishes were used according to the producer's instructions. The type, selection, preparation and surface application system of the varnish to be used and the post-application processes as recommended by the manufacturers and the techniques used are very important to make varnish layers durable against various effects and to ensure the desired properties. Therefore, materials used in the experiments (tests) were stored appropriately until their usage to prevent loss of properties. Varnishes were checked to confirm they had the properties specified in their descriptions and they were used after seeing that they were appropriate for the tests (viscosity control). The technical specifications of the conventional varnishes are given in Table 1.
Nanolacke UV varnish
Nanolacke UV varnish is a type of polyacrylic based varnish that includes nanosilica based nanominerals and uses nanocomposite ultraviolet curing. Organic and mineral oxides obtained through solgel technology and formed in the varnish provide a flexible and unscratchable structure thanks to the formation of three dimensional networks, the flexibility provided by the organic material and the resistance provided by mineral oxides. The three dimensional network formed can be seen on the SEM photo in Figure 1. Despite the high silica content, a luminous and transparent film is obtained [11]. This is a patented product, which has been developed by the DYO Company for the parquet and furniture industry. The mapping of the element Si in this cross section shows a good dispersion of the SiO 2 particles (no big aggregates, clusters, etc.). The individual SiO 2 particles can be clearly seen. They appear to be in contact with each other to form a three dimensional networks made of small grape-like clusters and chain-like segments (Figure 2).
Product application
The Nanolacke UV varnish system is ready to apply right out of the package. It is applied at a rate of 5-20 gram per m² (depending on the surface required) through pouring or pumping into a cylinder machine. The surfaces on which UV varnish system has been applied are dried in 1-3 seconds by passing under lamps having 2-3 x 80 W power on UV belts working at conveying speeds of 5-20 meter/minute.
Preparation of the test specimens
Wood samples were randomly selected from the materials described above. The rough drafts for the preparation of test and control specimens (massive panels) were cut from the sapwood parts of massive woods with dimensions of 190 X 140 X 15 mm 3 and conditioned at 20 ± 2°C and 65 ± 3% relative humidity until a 12% humidity distribution was reached, in accordance with ASTM D 358 [12]. Conditioned specimens with dimensions of 100 X 100 X 10 mm 3 were cut from the drafts for varnishing. The conditioned panels were sanded prior to varnishing in order to obtain smooth surfaces. Test specimens were varnished according to ASTM D 3023 [13]. The producer's instructions were taken into account for the composition of the solvent and hardener ratio. One or two finishing layers were applied after the filling layer. The spray nozzle distance and pressure were adjusted according to the producer's instructions and moved in parallel to the specimen surface at a distance of 20 cm. Varnishing was performed at 20 ± 2°C and 65 ± 3% humidity.
Hardness measurements
These measurements were performed after the varnish coating; the test samples were conditioned at 23 ± 2°C and 50 ± 5% relative humidity for 16 h. The hardness measurements of the varnished surfaces were taken according to ASTM D 4366 [14] with a pendulum damping test. The test device determined the layer hardness by means of the swing of a pendulum. The pendulum had marbles 5 ± 0.0005 mm in diameter with a Rockwell conventional hardness value of 63 ± 3.3. The amount of the swing was directly proportional to the surface hardness.
Adhesion measurements
The adhesion resistances of the varnish layers have been determined according to the ASTM D 3359-2 standard [15]. The measurements were performed after the varnish coating; the test samples were conditioned at 23 ± 2°C and 50 ± 5% relative humidity for 48 h. Tests have been repeated on three different samples of the same varnish system and on three different regions of each sample. Test regions were been inspected by using a light source and a magnifier. Results have been determined according to the tables given in related standards for Method A and B.
Data analysis
Through the combined use of two different species of wood and five types of varnishes, a total of 100 specimens (2x5x10) were prepared, with 10 specimens for each parameter. Data were analyzed using ANOVA and Tukey HSD tests. All statistical calculations were based on 95% confidence level.
Surface Hardness
As a result of the statistical evaluations, as seen in Table 2, it has been determined that there are significant and meaningful differences (p<0:05) between the various varnish systems applied on both beech and oak samples in terms of surface hardness. Thus, nanolacke UV varnish showed the highest hardness value among other varnish systems for samples of both species. Nanolacke UV varnish was followed by polyester, polyurethane and cellulosic varnish, respectively.
As it can be seen from the Table 2, differentiation among tree species had no statistically significant and meaningful effect on the surface hardness of conventional varnish systems. In similar studies [16,17], it was also reported that differences between tree species had no effect on either the hardness of varnish or dye layers, instead the primary effect is either the type of varnish or dye. Furthermore, in Ozen and Sonmez's study, the best result was obtained by polyester varnish, which was followed by polyurethane, cellulosic and synthetic varnish in terms of their surface hardness. These results support our results from this study.
Only in the systems where nanolacke UV varnish was employed are there slight but meaningful differences between the surface hardnesses of beech and oak samples. This suggests that the anatomical structures of tree species may affect varnish layer hardness in the Nanolacke varnish system. The varnish layer hardness of beech samples is more than that of the oak samples, as can be seen at Table 2. It is thought that this difference, which is not observed with conventional varnishes, resulted from a stronger interaction between varnish and the tree species in the new varnish system.
Adhesion
According to the ASTM D 3359-2 standard, dry film thicknesses were determined before the tests ( Table 3). The thickness of the varnish layers were measured with a comparator, which has a sensitivity of 5 μm. Since film thicknesses of varnishes, except polyester varnish, are smaller than 125 μm "Test method A" was applied to those varnishes. Test method B was applied to polyester varnish. In the evaluation after the tests are performed the best result with respect to adhesion resistance (5A) was obtained from nanolacke UV, followed by polyurethane and cellulosic varnish. Synthetic varnish (3A) followed these ones. Polyester varnish (2B) gave the worst result among the others (Table 3). The most important factors affecting the durability of a varnish layer are the binding force between the varnish layer and the tree surface (adhesion) and the internal binding force among the molecules of varnish (cohesion). The consistency of a varnish layer with respect to shape changes depends mostly on its elasticity.
According to the test results, nanolacke UV and polyurethane varnishes are the ones which showed the least damage during cutting and in other phases of the tests and gave the best results in terms of both adhesion and cohesion forces and elasticity.
Cutting of varnish shavings in the form of thin fibers during cutting of the synthetic varnish layers showed that both cohesion and elasticity of this varnish are high. However, separation of the varnish layer from the wooden surface when the test band was detached showed that the adhesion force of this varnish is weak.
In polyester varnish layers the case is just the opposite of synthetic varnish. While chips and coarse varnish ruptures were seen in cutting regions, there wasn't any wood rupture in the layer. So, it can be said that cohesion force is high but the elasticity is low and adhesion force is sufficient in the polyester layers.
Conclusions
As a result of the study, it can be concluded that using different species of trees in conventional varnish systems doesn't have any significant effect on varnish layer hardness. However, when using the nanolacke varnish system the use of different species of trees affects varnish layer hardness. There are a lot of factors that may cause this difference between beech (diffuse porous) and oak (ring porous) species (intensity, cell structure, basic and secondary compounds of wood, texture, extractive substances, etc.). Further research is suggested to elucidate the factor(s) that may cause this difference. Moreover, another result obtained is that using different tree species doesn't have an important effect on the bonding strength of varnish layers.
It has been determined that there are significant differences among varnish systems. Accordingly nanolacke UV gave the highest hardness value, followed by polyurethane, cellulosic and synthetic varnish, respectively. According to their adhesion resistances, nanolacke, polyurethane and cellulosic varnish gave the best results (5A). These were followed by synthetic varnish (3A). Polyester varnish, on the other hand, showed the lowest adhesion resistance (2B).
Long-term durability of varnishes applied on wooden surfaces like furniture, parquet etc. towards mechanical effects like friction, abrasion, impact, etc. depends on the resistance that varnish layers show against these effects. Varnished wooden surfaces are exposed to various effects, according to the properties of the place where they are used. Therefore, in order to prevent economic losses, the use of varnish systems which supply optimum efficiency according to the usage area is required. The results of this study demonstrate that nanolacke UV varnish has better resistance properties compared to conventional varnishes in terms of dry film resistance properties like surface hardness and adhesion. As a result, using nanolacke varnishes instead of conventional varnishes can be recommended for furniture and parquet areas for which varnish layer hardness and bonding strength are important.
Although the cost of nanolacke varnish system is higher than that of conventional varnish systems, it is an innovative product providing benefits to the user when its long-term durability and quality factors are taken into account. Furthermore, this nanotechnology product varnish is very original and important in terms of bringing nanotechnology and wood technologies together. | 3,897.6 | 2008-04-01T00:00:00.000 | [
"Materials Science"
] |
Determination of the Metabolite Content of Brassica juncea Cultivars Using Comprehensive Two-Dimensional Liquid Chromatography Coupled with a Photodiode Array and Mass Spectrometry Detection
Plant-based foods are characterized by significant amounts of bioactive molecules with desirable health benefits beyond basic nutrition. The Brassicaceae (Cruciferae) family consists of 350 genera; among them, Brassica is the most important one, which includes some crops and species of great worldwide economic importance. In this work, the metabolite content of three different cultivars of Brassica juncea, namely ISCI Top, “Broad-leaf,” and ISCI 99, was determined using comprehensive two-dimensional liquid chromatography coupled with a photodiode array and mass spectrometry detection. The analyses were carried out under reversed-phase conditions in both dimensions, using a combination of a 250-mm microbore cyano column and a 50-mm RP-Amide column in the first and second dimension (2D), respectively. A multi (three-step) segmented-in-fraction gradient for the 2D separation was advantageously investigated here for the first time, leading to the identification of 37 metabolites. In terms of resolving power, orthogonality values ranged from 62% to 69%, whereas the corrected peak capacity values were the highest for B. juncea ISCI Top (639), followed by B. juncea “Broad-leaf” (502). Regarding quantification, B. juncea cv. “Broad-leaf” presented the highest flavonoid content (1962.61 mg/kg) followed by B. juncea cv. ISCI Top (1002.03 mg/kg) and B. juncea cv. ISCI 99 (211.37 mg/kg).
Introduction
Vegetables from the Brassicaceae or Cruciferae family represent the most commonly consumed vegetables worldwide. This family includes brussels sprouts, broccoli, cabbage, cauliflower, and others. Such vegetables do contain high levels of bioactive compounds, e.g., polyphenols, carotenoids, tocopherols, glucosinolates, and ascorbic acid [1][2][3][4]. Epidemiological data have demonstrated the
Results and Discussion
The analysis of the three different cultivars of B. juncea L. was first run using a conventional LC-PDA-MS approach on a C18 column. As illustrated in the following section, a considerable number of compounds overlapped; consequently, an RP-LC×RP-LC system was adopted in order to attain higher separation power, thus providing a thorough overview of the overall metabolites pool, which is beneficial for quantification purposes.
Elucidation of Brassica juncea Cultivars Using RP-LC×RP-LC-PDA-MS
RP-LC×RP-LC separations have proved to be quite effective for the analysis of the metabolite content of food and natural products [19][20][21][22]25,[30][31][32]. Before running an RP-LC×RP-LC analysis, a careful optimization of the independent separations must be carried out [26,27,29]. A low mobile phase flow rate is preferred in the 1 D in order to decrease the fraction volume onto the 2 D and augment the 1 D sampling rate. Usually, this is achieved by employing a microcolumn in the 1 D; however, since most commercial LC pumps are not capable of delivering a stable and repeatable flow rate, a higher flow rate is commonly employed and split up before entering the 1 D column. A scheme of the RP-LC×RP-LC employed is reported in Figure 1. In this work, a robust and easy-to-use micropump with a completely new direct-drive engineering was advantageously employed, and was capable of delivering micro-to semi-micro flow rates ranging from 1 to 500 µL/min. Repeatability data obtained on four selected peaks are displayed in Table 1. Relative standard deviation (RSD, %) values lower than 0.02 were attained in the case of mean retention times (min), whereas RSD (%) values lower than 1.21 were determined in case of mean areas. With regard to the 2 D, a fast separation is commonly employed in order to increase the 1 D sampling to lower the risk of incurring wrap-around phenomena. Consequently, a microcyano column was chosen in the 1 D, whereas a 4.6-mm I.D. partially porous RP-Amide column was employed in the 2 D and operated at 2 mL min −1 . For fraction transfer, two high-speed, six-port, two-position switching valves equipped with two 10 µL sampling loops were chosen.
In this context, the optimization of the gradient programs, especially for the 2 D, is also necessary for an adequate separation and is mainly related to the chemical properties of the solutes. Late eluting compounds that are retained more in the 2 D require a greater gradient steepness in order not to incur wrap-around effects. In the case of closely related compounds, e.g., early-eluting compounds, which are subjected to co-elutions, a lower gradient of steepness is preferable in order to permit stronger retention.
Following this strategy, a newly developed RP-LC×RP-LC system was investigated. In particular, a multi segmented-in-fraction gradient approach was employed, as illustrated in Figure 2. In particular, three different full-in-fraction gradients were considered for the 2 D analysis. The first gradient was from 10 to 32 min, where %B ranged from 3% to 8% (∆%B: 5) for the analysis of early eluting organic acids; in the second gradient step (from 32 to 43 min), %B ranged from 10% to 44% (∆%B: 34) for the analysis of (acetylated) tri-and tetrasaccharides, whereas in the last one (from 43 to 60 min), %B ranged from 20% to 60% (∆%B: 40) for the analysis of late eluting (acetylated) mono-and disaccharides. The modulation time of the switching valves was 1.00 min. Figure 2 shows the contour plots for the RP-LC×RP-LC analysis of the three cultivars of Brassica juncea, where a total of 37, 34, and 31 metabolites were positively separated using the optimized multi segmented-in-fraction gradient approach.
Concerning the performance of the developed RP-LC×RP-LC system, Table 2 reports the values attained for both peak capacity and orthogonality [33]. The highest theoretical peak capacity values, which are multiplicative of the peak capacity of the two single dimensions [34], were attained for the cultivar ISCI Top (1734), whereas the lowest was obtained for the cultivar ISCI 99 (932). The orthogonality values ranged from 62% to 69% for ISCI Top and "Broad leaf", respectively [33]. The corrected peak capacity values, which considered, both undersampling [35] and orthogonality values, were 639, 404, and 502 for ISCI Top, ISCI 99, and "Broad leaf", respectively. Considering the similarity of the two separation systems employed in both dimensions, such values can be considered quite remarkable and are in agreement with previously published findings on similar set-ups exploited for polyphenolic characterization in licorice (695 in Wong et al. [30]) and pistachio (461-633 in Arena et al. [31]) samples. As an example, the benefits associated with the employment of the developed RP-LC×RP-LC with the multi segmented-in-fraction gradient program over the conventional RP-LC separation are highlighted in Figure 3.
A selected chromatographic region of the Brassica ISCI Top extract ( Figure 3A) clearly shows how the 1D-LC did not provide enough peak capacity for unambiguous characterization of the chemical profile of the three occurring metabolites, due to compound overlapping. However, when the RP-LC×RP-LC analysis was employed, the three different bioactive compounds were conveniently separated and characterized via inspection of the respective MS spectra ( Figure 3B). As a result, the better resolution of the RP-LC×RP-LC separation (with the 2 D operated under the multi (three-step) segmented-in-fraction gradient mode) over the conventional 1D-LC led to a greater metabolite expansion in the RP-LC×RP-LC space, which was essential for improving the reliable identification of compounds with complexity and/or various polarities.
Semi-Quantitative Determination of the Flavonoid Content of Brassica juncea Cultivars
Tentative identification of the Brassica juncea extracts, illustrated in Figure 2, was performed based on their PDA, MS, and literature data [1,2,[9][10][11][14][15][16][36][37][38]. Among the major classes of compounds identified, organic acids, (acetylated) tri-and tetrasaccharides, and (acetylated) mono-and disaccharides, were recognized (Table 3). Due to the lack of commercial standards, quantification of Brassica spp content has so far been carried out after acidic and/or alkaline hydrolysis [36][37][38]. In this work, a quantification of the native flavonoid composition of the three cultivars of Brassica juncea was carried out by RP-LC×RP-LC system coupled to PDA detection for the first time. Toward such an aim, and considering the unavailability of corresponding standard references, an established approach in the field of food and natural product analysis was followed. Basically, three standards, as representatives of the distinct chemical classes, i.e., Km 3-O-glucoside, Isorhamnetin (Is) 3-O-glucoside, and Qn 3-O-glucopyranoside, were chosen and calibration curves were prepared, as reported in Section 3.4.5. Results are shown in Table 4, which reports all the standard curves, correlation coefficients (R 2 ), limits of detection (LoDs) and limits of quantification (LoQs), and relative standard deviations (RSDs) of the peak areas for each standard selected. The five-point calibration curves provided R 2 values ranging from 0.9993 to 0.9997, whereas for LoQ and LoD, values as low as only 30 ppb and 90 ppb, respectively, were found. Finally, RSD values lower than 0.89% were obtained, demonstrating valuable method repeatability. Subsequently, all three samples were analyzed and the contents of the target compounds were calculated using commercially-available software, as reported in Table 3 1D-LC separations were performed on an Ascentis Express C18 column (Merck Life Science, Merck KGaA, Darmstadt, Germany; 150 × 4.6 mm I.D., 2.7 µm dp). LC×LC separations were conducted by using a 1 D Ascentis ES-Cyano (ES-CN) column (Merck Life Science, Merck KGaA, Darmstadt, Germany; 250 × 1.0 mm I.D., 5 µm dp) and a 2 D Ascentis Express RP-Amide column (Merck Life Science, Merck KGaA, Darmstadt, Germany; 50 × 4.6 mm I.D., 2.7 µm dp).
Sample and Sample Preparation
Brassica juncea L. Czern & Coss cv. ISCI 99, ISCI Top, and "Broad-leaf" leaf selections were provided from the Brassica collection of Consiglio per la ricerca in agricoltura e l'analisi dell'economia agraria -Centro di Ricerca Cerealicoltura e Colture Industriali) (CREA-CI) [39]. Samples were immediately frozen and freeze-dried for storage in glass vacuum desiccators. Lyophilized tissues were finely powdered to 0.5 µm size for analysis. Compound extraction was carried out based on the following protocol [16] with some modifications. The powder of the leaves of the three different B. juncea cultivars were weighed into 100 mg samples. The samples were extracted twice with 5 mL of a mixture of methanol:water (60:40, v/v) for 30 min in a sonicator and centrifuged at 1000× g for 15 min, followed by filtration of the supernatants through a 0.45-µm nylon filter (Merck Life Science, Merck KGaA, Darmstadt, Germany). The prepared organic extracts were subjected to evaporation in a EZ-2 evaporator and then redissolved in 1 mL of the same solvent extraction mixture of methanol:water (60:40, v/v).
Instrumentation
LC and LC × LC analyses were performed on a Nexera-e liquid chromatograph (Shimadzu, Kyoto, Japan), consisting of a CBM-20A controller, one LC-Mikros binary pump, two LC-30AD dual-plunger parallel-flow pumps, a DGU-20A 5 R degasser, a CTO-20AC column oven, a SIL-30AC autosampler, and an SPD-M30A PDA detector (1.0 µL detector flow cell volume). The two dimensions were connected by means of two high-speed/high-pressure, two-position, six-port switching valves with a micro-electric actuator (model FCV-32 AH, 1.034 bar; Shimadzu, Kyoto, Japan), placed inside the column oven, and equipped with two 10-µL stainless steel loops. The Nexera-e liquid chromatograph was hyphenated to an LCMS-8050 triple quad mass spectrometer through an ESI source (Shimadzu, Kyoto, Japan).
Data Handling
The LC × LC-LCMS-8050 system and the switching valves were controlled using the Shimadzu Labsolution software (ver. 5.93) (Kyoto, Japan). LC×LC-Assist software (ver. 2.00) (Shimadzu, Kyoto, Japan) was used for setting up the multi (three-step) segmented-in-fraction gradient analyses. The LC × LC data were visualized and elaborated into two and three dimensions using Chromsquare ver. 2.3 software (Shimadzu, Kyoto, Japan).
Construction of Calibration Curves
For flavonoid determination, due to the lack of commercial standards, Km 3-O-glucoside, Is 3-O-glucoside, and Qn 3-O-glucopyranoside, as representatives of the distinct chemical classes under evaluation, were selected. Standard calibration curves were prepared in the concentration range 0.1-100 mg L −1 with five different concentration levels, run in triplicate. The amount of the compound was finally expressed in mg kg −1 of extract.
Conclusions
In this paper, the benefits associated with the use of a multi (three-step) segmented-in-fraction gradient in the RP-LC×RP-LC-PDA-MS analysis of three Brassica juncea cultivars are demonstrated. The coupling of a microcyano and an RP-Amide columns, in the first and second dimension, respectively, provided a characteristic metabolite pattern of the extracts, leading to the identification of 37 bioactives of different chemical nature, i.e., organic acids, (acetylated) tri-and tetrasaccharides, and (acetylated) mono-and disaccharides. Interestingly, the employment of a micro LC pump in the first dimension of the RP-LC×RP-LC-PDA-MS systems allowed for high repeatability and stable retention times and areas. The investigated approach can be advantageously employed for RP-LC×RP-LC metabolic analyses of other complex plant derived extracts. | 2,863 | 2020-03-01T00:00:00.000 | [
"Agricultural and Food Sciences",
"Chemistry",
"Environmental Science"
] |
A class of charged relativistic spheres
We find a new class of exact solutions to the Einstein-Maxwell equations which can be used to model the interior of charged relativistic objects. These solutions can be written in terms of special functions in general; for particular parameter values it is possible to find solutions in terms of elementary functions. Our results contain models found previously for uncharged neutron stars and charged isotropic spheres.
INTRODUCTION
The Einstein-Maxwell system of field equations are applicable in modelling relativistic astrophysical systems. We need to generate exact solutions to these field equations to model the interior of a charged relativistic star that should be matched to the Reissner-Nordstrom exterior spacetime at the boundary. A general treatment of nonstatic spherically symmetric solutions with vanishing shear was performed by Wafo Soh and Mahomed [1] using symmetry methods. The matching of nonstatic charged perfect spheres to the Reissner-Nordstrom exterior was considered by Mahomed et al. [2] who showed that the Bianchi identities restrict the number of solutions. Particular models generated can be used to model the interior of neutron stars as demonstrated by Tikekar [3], Maharaj and Leach [4] and Komathiraj and Maharaj [5]. Charged spheroidal stars have been widely studied by Sharma et al. [6] and Gupta and Kumar [7]. There exist comprehensive studies of cold compact objects by Sharma et al. [8], analysis of strange matter and binary pulsar by Sharma and Mukherjee [9] and quark-diquark mixtures in equilibrium by Sharma and Mukherjee [10], in the presence of the electromagnetic field. Charged relativistic matter is important in the modelling of core-envelope stellar systems as demonstrated by Thomas et al. [11], Tikekar and Thomas [12] and Paul and Tikekar [13]. The recent treatment of Thirukkanesh and Maharaj [14] deals with charged anisotropic matter with a barotropic equation of state which is consistent with dark energy stars and charged quark matter distributions.
The exact solution of Tikekar [3] is spheroidal in that the geometry of the spacelike hypersurfaces generated by t=constant are that of a 3-spheroid. This condition of a spheroid helps to mathematically interpret the solution since it provides a transparent geometrical interpretation. On physical grounds we find that this solution can be applied to model superdense stars with densities of the order 10 14 g cm 3 . The physical features of the Tikekar model are therefore consistent with observation, and consequently it attracts the attention of several researches as a realistic description of the stellar interior of dense objects. This solution was extended by Komathiraj and Maharaj [5] to include the electromagnetic field, with desirable physical features. In this paper we show that a wider class of solutions to the Einstein-Maxwell system is possible by adapting the form of the gravitational potentials. Our intention is to obtain simple forms for the solutions that are physically reasonable and may be used to model a charged relativistic sphere.
SPHERICALLY SYMMETRIC SPACETIME
The metric of static spherically symmetric spacetimes in curvature coordinates can be written as where ν(r) and λ(r) are two arbitrary functions. For charged perfect fluids the Einstein-Maxwell system of field equations is given by for the line element (1). The quantity ρ is the energy density, p is the pressure, E is the electric field intensity and σ is the proper charge density. To integrate the system (2) it is necessary to choose two of the variables ν, λ, ρ, p or E. In our approach we specify λ and E.
In the integration procedure, we make the choice where k and l are arbitrary constants. Note that the choice (3) ensures that the metric function e 2λ is regular and finite at the centre of the sphere. When k = −7 and l = 1, in the absence of charge, we regain the Tikekar interior metric [3] which models a superdense neutron star. Also observe that when l = 1 we regain the metric function considered by Komathiraj and Maharaj [5] which generalises the Maharaj and Leach [4] and Tikekar [3] models. Therefore particular choices of the parameters k and l produce regular charged spheres which are physically reasonable. Also the choice (3) ensures that charged spheres generated, as exact solutions to the Einstein-Maxwell system, contain well behaved uncharged models when E = 0. On eliminating p from (2b) and (2c), for the choice (3), the condition of pressure isotropy with a nonzero electric field becomes which is nonlinear. To linearise the above equation it is now convenient to introduce the transformation where l = 0. This transformation helps to simplify the integration procedure but changes the form of the potentials and matter variables. Then (4) becomes in terms of the new dependent and independent variables ψ and x respectively. Equation (6) must be integrated to find ψ, i.e. the metric function λ.
Note that the Einstein-Maxwell system (2) implies in terms of the variable x. Note that we have essentially reduced the solution of the field equations to integrating (6). It is necessary to specify the electric field intensity E to complete the integration. Only a few choices for E are physically reasonable and generate closed form solutions. We can reduce (6) to simpler form if we let where α is constant. When α = 0 or k = 0 there is no charge. The form for E 2 in (8) vanishes at the centre of the star, and remain continuous and bounded in the interior of the star for a wide range of values of the parameters α, k and l. Upon substituting the choice (8) into (6), we obtain which is the master equation for the system (7). We expect that our investigation of equation (9) will produce viable models of charged stars since the special cases α = 0 and α = 0, k = 0, l = 1 yields models consistent with neutron stars.
NEW SOLUTIONS
As the point x = 0 is a regular point of (9), there exists two linearly independent series solutions with centre x = 0. Thus we must have where a i are the coefficients of the series. For an acceptable solution we need to find the coefficients a i explicitly. On substituting (10) in (9) we obtain after simplification The equation (11) is the basic difference equation governing the structure of the solution. It is possible to express the general form for the even coefficients and odd coefficients in terms of the leading coefficient a 0 and a 1 respectively by using the principle of mathematical induction. We generate a pattern for the even coefficients a 0 , a 2 , a 4 . . .. Also we find the pattern (13) for the odd coefficients a 1 , a 3 , a 5 . . .. Here the symbol denotes multiplication.
From (10), (12) and (13), we can write the general solution of (9) as where we have set Thus we have found the general solution to the differential equation (9) for the particular choice of the electric field (8). Series (15a) and (15b) converge if there exists a radius of convergence which is not less than the distance from the centre of the series to the nearest root of the leading coefficient in (9). This is possible for a range of values of k and l. The general solution (14) is given in the form of a series which may be used to define new special functions. For particular values of the parameters α, k and l it is possible for the general solution to be written in terms of elementary functions which is a more desirable form for the physical description of a charged relativistic star. Solutions that can be written in terms of polynomials and algebraic functions can be found. This is a lengthy and tedious process and we therefore do not provide the details; the procedure is similar to that presented in Komathiraj and Maharaj [5] which can be referred to. The solutions found can also be verified with the help of software packages such as Mathematica. Consequently we present only the final solutions avoiding unnecessary details.
Two classes of solutions in terms of elementary functions can be found. These can be written in terms of polynomials and algebraic functions. The first category of solution for ψ(x) is given by (16) with the values The second category of solution for ψ(x) has the form with the values where A and B are arbitrary constants and x 2 = 1 − lr 2 /R 2 .
SPECIAL CASES
From our general class of solutions (16) and (17), it is possible to generate particular cases found previously. These can be explicitly regained directly from the general series solution (14) or the elementary functions (16) and (17). We demonstrate that this is possible in the following classes.
If we set l = 1 then (16) and (17) contain the solution of Komathiraj and Maharaj [5] for charged spheres which are generalizations of earlier models with spheroidal geometry.
DISCUSSION
We have studied the Einstein-Maxwell system of equations for a particular choice of the electric field intensity. The gravitational potential was generalised to include the spheroidal geometry of the hypersurfaces t=constant of previous investigations. When l = 1 then we regain the Tikekar [3] model and other exact solutions found previously. We demonstrated that it was then possible to reduce the condition of pressure isotropy to a second order linear ordinary differential equation. This equation can be solved in general using the method of Frobenius and the solution are in terms of new special functions. Solutions in terms of elementary functions can be extracted from the general solution for specific parameter values. Particular models studied previously are contained in our general class of solution. These solutions may be useful in studying the physical behaviour of dense charged objects in relativity which will be the objective in future work.
We briefly discuss the behaviour of the matter variables close to the centre. We can graphically represent the the matter variables in the stellar interior for particular choices of the parameter values. To this end we have produced Figure 1 with the help of the software package Mathematica. We have set A = B = C = 1, k = − 1 4 , l = −1 and α = 3 2 over the interval 0 ≤ r ≤ 1, to generate the relevant plots in Figure 1. Plots A and B denote the profiles of energy density ρ and the pressure p; plot C denotes the electric field intensity E 2 . We observe that these matter variables remain regular in the interior. We note that the energy density ρ and the pressure p are positive and finite; they are monotonically decreasing functions in the interior. The electric field intensity E 2 is positive and monotonically increasing in this interval. Thus the quantities ρ, p and E 2 are finite, continuous in the interval. | 2,545.2 | 2010-12-01T00:00:00.000 | [
"Mathematics",
"Physics"
] |
LATIN-AMERICAN WRITING INITIATIVES IN ENGINEERING FROM SPANISH-SPEAKING COUNTRIES
his article analyzes Latin-American publications from Spanish-speaking countries to map programs pursued in the region and then provide a context to envision further research agendas for Latin-American Writing Studies in Engineering. he analysis of 22 publications suggests that initiatives and studies in Engineering are recent (as of 2009). he sample reveals an emphasis on pedagogically-oriented publications focused on Engineering as one ield. he trends suggest that the Latin-American writing advocates in Engineering might broaden research scopes by incorporating theoretical frameworks for a) exploring and understanding diferent roles of writing across time and curriculum in student learning and by Engineering subields and, b) exploring theoretical approaches to understand genres beyond individual texts (genre repertoires and genre systems).
Introduction
In Latin America there is no ield known as writing studies and there is a lack of systematic research to document the history and the emergence of the ield (Navarro et al., 2016).However, studies and pedagogic initiatives on reading and writing in higher education date back to 2000.An interregional project titled "Initiatives of Reading and Writing in Higher Education in Latin America (ILEES)" conducted between 2013-2015 aimed at mapping endeavors in Argentina, Chile, Colombia, Mexico, Puerto Rico, and Venezuela 1 .ILEES data show that Latin American writing advocates are ailiated to diverse disciplinary ields: Language Sciences, Applied Linguistics, Systemic Functional Linguistics, Education, Psychology, and Humanities.Furthermore, there are no central venues in the region, journals or associations, on professional or disciplinary writing; thus, academic publications primarily circulate in journals ailiated to Linguistics and Language Sciences. 2he ILEES project also reveals that the most frequent initiative across countries is courses on freshman composition in Spanish, although writing centers and programs are rapidly emerging in the region (Narváez, 2014). he analysis of information available on the websites of writing centers and programs reveals that despite an advocacy for supporting disciplinary writing endeavors, online materials (guidelines, activities, forums) portray academic writing as a general skill in freshman years regardless of disciplines, since writing support is focused on essays, book reviews, literature reviews and a general genre named "writing assignments" (Narváez, 2015).herefore, in Latin America there is no speciic ield of writing in the disciplines (WID), in general, or Technical Communication, in particular.
he interdisciplinary ield of Technical Communication has developed in the U.S. as part of historical and evolutionary interactions among economic changes, scientiic developments, and digital technology innovations.he ield has been fostered especially by advancement of engineering and related disciplines which generate both scientiic knowledge and inancial proit.his interdisciplinary ield has emerged to research and teach communication practices within institutions (companies, research centers, civil organizations, government, and universities) associated with scientiic and technological changes and corporate capitalism in North America.
In Latin American countries, global trends of science, innovation, and technology transfer appear as part of higher education reforms and funding research agendas.Regardless of the gap in progress and the presence of diverse needs among regions and economies, Latin-American faculty and students in Engineering are asked to produce cutting-edge knowledge and artifacts within a ierce competition.Consequently, contributions of technical communication programs are valuable across regions, especially in Spanish Latin America, provided they are developed to help stakeholders by taking critically into account their local conditions and challenges of global and transcultural demands.
Given the emerging Latin-American body of initiatives and studies in Engineering writing, this article aims at exploring features of these speciic endeavors reported by publications.Journal articles and oral communications have been explored to provide tentative answers for the following research questions: a.When his article takes into account the important call for conducting research about writing and communication across cultures avoiding an ethnocentric bias (hatcher, 1999; 2000, 2001, 2005, 2009, 2010).To do so, the aforementioned questions are, in the irst place, answered by describing the sample and the results of the analysis of Latin-American publications on Engineering writing.Since the ultimate goal of this study is to provide a context to boost agendas for Latin-American Writing Studies in Engineering and the development of Technical Communication Programs in the region, 3 a section framing technical communication programs in the U.S. is presented and utilized, in the discussion section, to identify shared features and diferences between the Latin-American endeavors and the U.S. programs.his contrast is useful to envision Latin-American research agendas by valuing what has been locally developed and identifying international debates in which Latin-American writing advocates can contribute in the ield of Technical Communication.
A qualitative analysis of publications that report Latin-American writing initiatives in Engineering
Since there are no central venues (either journals, professional associations, or open-access projects) to easily access the literature on writing studies in Latin America, in general, and Engineering writing, in particular, the sample in appendix 1 was comprised of 22 publications (journal articles and oral communications) collected as a convenience sampling.I revised the published memoirs in CD of two academic events 4 I participated in August 2014 by electronically searching across the iles with the label "Engineering".During the same period, I emailed Latin-American colleagues I knew they worked on Engineering writing and asked for their publications.By using this data gathering, I collected 16 publications.In December 2014, the irst Latin-American Data Base on reading and writing studies was launched (Cisneros, 2014), 5 and I also searched by using the tag "Engineering"; 6 other publications were gathered.
Titles, abstracts, and sections of conclusions and discussion of the publications were read to conduct the analysis.Table 1 presents the relationship between the research questions and the analytical categories utilized: What might count as "genre": here is a methodological debate in the ield of Genre Studies about theoretical and methodological limitations in describing and labeling genres (Devitt, Bawarshi and Reif, 2003;Herrington & Moran, 2005;Gardner & Nesi, 2012); therefore, the following categories were deined in advance to analyze the notion of "genre" as a theoretical entity based on the overview presented by Bawarshi and Reif (2003): • An individual textual-linguistic entity.
• As a group of linguistic, textual, and/or rhetorical patterns of language use that has been called "genre families" or "genre repertoires".
• As a language usage and circulation enacted by diferent participants/roles within discourse communities and actual institutions (genre systems).
What Engineering ields have been of interest? Mentions of Engineering ields, if any
This paper aims to analyze on writing guidance and assessment ofered in a course titled "Sotware Engineering Processes" and its impact in students for learning content knowledge.he research relies on theoretical frameworks on Academic Literacy and Writing across the Curriculum.Writing assignments were designed collaboratively between the instructor of the course and a writing instructor.A case study was conducted based on this experience (…) he results show as inluential elements in the learning process: instructors' guidelines, evaluation rubrics, instructor's feedback, student-teacher conferences, assigning writing tasks with speciic purposes, as well as opportunities to write drats and present progress reports (…) Reasons for studying or leading initiatives on Engineering writing 3. What theories of genre have framed the studies and initiatives? he trend of the initiatives and studies being primarily focused on incorporating writing to learn might be also related to 14 of the 22 articles in the sample thematizing genres as textual-linguistic entities.Out of 14 cases, 4 publications are research-oriented, 6 are pedagogically-oriented, and 4 are research on pedagogical experiences.he following fragment illustrates these cases by presenting an excerpt of an Argentine study about a pedagogic intervention focused on a professional Engineering genre named "the yearly memory": In this research, we present a didactic design that articulates research work about professional genres and disciplinary contents based on teaching a professional genre.More speciically, it is about an experience that took place in an Economics course for Engineers in which the speciic teaching of a professional genre, the yearly memory, is addressed in relation with other contents of the course program (…) he tendency on writing to learn approach throughout the publications might be connected with some of the types of genres mentioned in the sample (i.e., writing assignments, summaries, and computinglaboratory tasks) (table 2).
Writing assignments 3
Computing laboratory tasks 2 Summaries 2
Annual report 1
Lab reports 1 Reports of formative research experiences 1
Research report 1
Standardized operating procedure 1 Engineering students.he ultimate purpose in using this strategy was to raise students' consciousness about the key role of writing in their learning process.Unlike traditional summary writing, these ones were not written for the professor for him to assess whether the students had reviewed course contents.Instead, summaries provided the students with an opportunity to monitor their own learning process and express what they had learned in a written format (...) At least 13 cases of the studies/initiatives reported have implied certain degree of institutionalization (i.e., they have had departmental supports and have been part of curriculum reforms or products of faculty development programs).his tendency is illustrated by the next example about a research project conducted as part of an Argentine institutional program for teaching academic and professional literacy across curriculum: Example 6 Article title: "Professional literacy during the university career: between the university and the enterprise" (2015) Abstract (…)his paper presents some partial results of an ongoing research project about writing practices in the Industrial Engineering ield.As part of the activities of an institutional program for teaching academic and professional literacy across the university curriculum (PRODEAC) developed at Universidad Nacional de General Sarmiento (UNGS), we surveyed a group of engineers in order to analyze the genres employed in professional settings (…) 6. What type of language is studied or taught?
In 16 cases in the sample, it was not possible to identify the type of language addressed/studied in the initiatives/studies of the publications; however, the cases analyzed suggest that there is an attempt to address linguistic and mathematical systems by articulating them (4 cases), as well as see language as multimodal (1 case). he following case illustrates with a research project that aimed at describing and quantifying multisemiotic artifacts emerging from a Chilean corpus collected in twelve PhD programs:
ABSTRACT
From strictly linguistic studies, the characterization of multisemiotic written specialized texts has been scarce or almost null.Not many corpus-based studies focus on the description of graphs, tables, and diagrams, as well as their layouts, as part of academic texts.he objective of this study is to identify, describe and quantify the occurrence of (multi)semiotic artifacts which are present in a sample of texts (1.043) belonging to the PUCV-2010 Academic Corpus. he corpus was collected in twelve PhD programs in six Chilean universities and comprises all the documents students are given to read during their formal curricula, with the exception of those included in the inal doctoral research (3,160 written texts, which are distributed among Physics, Chemistry, Biotechnology, History, Literature, and Linguistics) (…)
Are computers part of the studies and interventions
on Engineering writing?
Computers were mentioned as part of the initiatives/studies in 3 cases (reported as part of the same pedagogical experience).he case below shows these types of mentions in an Argentine action-research experience in which students were asked to perform computer assignments in a calculus course: Example 8 Article title: "Early error detection: an action-research experience teaching vector calculus" (2013) his paper describes an action-research experience carried out with second year students at the School of Engineering of the National University of Entre Ríos, Argentina.Vector calculus students played an active role in their own learning process.hey were required to present weekly reports, in both oral and written forms, on the topics studied, instead of merely sitting and watching as the teacher solved problems on the blackboard. he students were also asked to perform computer assignments, and their learning process was continuously monitored (…) he next case is an illustration of conclusions implying the trend in the approach "writing to learn".According to the evaluation of an Argentine pedagogic intervention that was relying on writing weekly reports, it seems that some of the students considered the approach very demanding: texts, and weekly reports for solving problems with theoretical justiications).In junior and senior years, research-oriented writing has been incorporated (e.g., research reports) (table 4).
Table 4 he genres mentioned in diferent learning moments of the initiatives pedagogically-oriented he qualitative distribution of individual genres mentioned by the publications pedagogically-oriented conirms that in freshman and sophomore years one of the pedagogical goals has been writing to learn (i.e., computer laboratory tasks, reading mathematical
Example 9 Article title: "Improvement of Mathematics teaching and learning in Bioengineering: a challenge assumed from action research" (2012)
A survey was designed and implemented to the students in the courses 2008, 2009 and 2010, with the purpose of evaluating their perception of the weekly report.he questions and results are shown in igure 3. 85% of the students regard the experience as useful and necessary for their performance, since it helped to understand concepts and correct mistakes.On the other hand, 15% of the students said the experience was not positive, most of all for the lack of respect of individual learning pace of students and for being too demanding in terms of time.
he tendency of the publication in the approach "writing to learn" might be also related to fact the publications were primarily pedagogically-oriented or reporting research on pedagogical experiences.Furthermore, the third frequent mention in the conclusions related to positive evaluations of interdisciplinary work (6 cases) conirms that interdisciplinarity has been a feature of the initiatives/ studies undertaken (16 cases).
he speciic conclusion about "Interdisciplinary work is an opportunity" emerged in 5 cases in publications pedagogically-oriented (3 are research on pedagogical experiences, and 2 are pedagogicallyoriented), and in 1 case a research-oriented article.
he analysis of the topics on limitations stated by the publications also suggests that the writing advocates demand certain conditions related to class sizes in the cases of initiatives/studies pedagogicallyoriented; for instance, the amount of students is mentioned in the following Argentine case as a hindering issue in incorporating writing as part of the Engineering courses:
Example 10 Article title: "Creating an educational research space in an Engineering department" (2009)
Regarding the negative aspects and diiculties in its implementation, "it is diicult to distribute the time between the control of individual work and group discussion to explain the subjects in which frequent errors of the group are detected"."he high number of students by section hinders a more personalized monitoring".Furthermore, it is stated that despite the writing supports introduced by these initiatives, the students still struggle in developing audience awareness.
As for implications derived from the studies/ initiatives, the writing advocates envision studies to explore disciplinary features of writing and communication in Engineering as well as professional genres.his particular implication is regarded in the publications as useful to inform pedagogical initiatives across the curriculum and design teaching materials and resources.he following fragment illustrates these cases by presenting an excerpt of a Chilean study describing multisemiotic artifacts from a corpus of readings at PhD levels: Research studies as the one described here also have pedagogical implications concerning: (a) the selection of written genres, (b) the elaboration of teaching materials, and (c) the preparation of language tests of various kinds, such as the assessment of disciplinary contents and of specialized discourse comprehension.
Furthermore, this analysis also reveals that the authors of the publications are advocating for pedagogic initiatives that bridge academic and professional writing practices in Engineering, which might suggest that it is already acknowledged this double nature of writing and communication in the Engineering ield (3 cases of publications research-oriented).
Technical communication programs in the U.S.: teaching and research agendas
Since the ultimate goal of this study is to provide a context to boost agendas for Latin-American Writing Studies in Engineering and the development of Technical Communication Programs in the region, the following section is framing technical communication programs in the U.S., which will be utilized, in the discussion section, to identify shared features and diferences between the Latin-American endeavors and the U.S. programs.his contrast is useful to envision Latin-American research agendas by valuing what has been locally developed and identifying international debates in the ield of Technical Communication to which Latin-American writing advocates can contribute.
he status of the U.S. Technical Communication Programs
Several studies have mapped the scope of the ield of Technical Communication in order to understand the history, the inluence of diferent disciplinary traditions, and tensions between theories and practices in academia and workplace settings (Selfe & Selfe, 2012).Overall, the ield is acknowledged as interdisciplinary, since it shares and borrows methods, theories, and even content areas from Design Communication, Speech Communication, Rhetoric and Composition, Psychology, Education, and Computer Science (Spilka, 2002;Rude, 2009).his interdisciplinarity might be related to an unformed disciplinary identity and the lack of external recognition that has been attributed to multiple causes: a) the relative newness of the ield as an area of inquiry; b) the assumption that the ield on technical communication ofers "services", in both corporate and academic settings, for more dominant ields such as Engineering, Information Technology, and Business; and, c) the adjunct status of technical communication programs within the English departments (in which they are usually housed), and within the broad ield of Rhetoric and Writing (Rude, 2009).
Universities oten treat Business and Technical Communication Programs (BTC) similarly to irst-year composition (FYC), that is, as a course about academic writing as a general domain instead of a professional or disciplinary practice.As happens in FYC courses, BTC courses for students who are not pursuing Engineering or Science as their majors perceive the classes as service courses, and their instructors must struggle for status and identity within their universities (Russell, 2007).Technical communication instructors have reported they oten feel like outsiders within English departments.Regarding work conditions, technical communication instructors are typically hired as lecturers; however, sometimes instructors in writing programs are better situated in terms of status, community, and respect (Reave, 2004).
Diverse student population shapes the BTC programs.In technical communication courses, all the students are Engineering majors; whereas in courses ofered outside the schools, students from other majors attend.A diverse student body is more typical for elective courses (Reave, 2004).
Furthermore, "writing to learn" and "learning to write" 8 are still part of the pedagogical debates in the ield, because BTC courses are seen as more disciplinary or professionally orientated than irst-year composition courses (FYC), which are mainly associated with general education classes. 9BTC research and pedagogy have mainly focused on workplace communication and preparing students for workplace communicative practices (Russell, 2007).As a result, these courses typically are comprised by a more cohesive student population in terms of their majors than those in liberal arts courses (FYC associated with general education) (Russell, 2007).
he academic unit housing the programs of technical communication is also an issue of debate mostly because of the diversity of approaches from humanities, professional, and science communication (Maylath, Grabill & Gurak, 2010).For instance, since Departments of English ofer "services" to Engineering majors, teaching this type of course is seen as outside of the main purpose of English departments.herefore, technical writing instructors (and scholars) are seen as subservient to both Engineering and English but essentially at home in neither place (Yeats & hompson, 2010).Overcoming the tensions between English departments and technical communication programs is thus seen as an important issue in the agenda of the U.S. ield (Bernhardt, 2002). he tensions might be overcome insofar as it is accepted that technical writing has humanistic value by exploring rhetorical features in science communication even under expectations of objectivity (Miller, 2004).Cultural beliefs tend to treat "technology" and "data" as ixed and unbiased objects, but these assumptions change if science production is acknowledged as a rhetorical act, that is, knowledge production as a result of interpersonal negotiations in interpreting evidence (Winsor, 1996).
Debates over the status of the ield have therefore inluenced research and teaching agendas.One of the visible debates is how to deine the ideal practices of the ield.If technical communication and writing simply draw on the best practices carried out by industry and other workplace settings, non-academic environments would be producing authoritative knowledge, which leaves a narrow intellectual space for the discipline.Instead, if technical communication and writing are more than instrumental means predetermined by others, academic knowledge on technical communication and writing can advise about qualities of workplace practices (Monberg, 2002;Boettger, & Lam, 2013).Furthermore, under the importance of models on "expanding learning" (Bernhardt, 2002), 10 a research agenda is needed to provide information about transitions and overlapping practices between industry and academic settings; this means that academia and workplaces should interact to boost knowledge propagation and innovation between these two contexts.
he research agenda of the ield of Technical Communication must also include studies on what has been marginalized or silenced in order to democratize participation of those who have been seen as historically relegated in organizations (hralls & Blyler, 2002).Participatory research is one way to give voice to those who are being investigated (hralls & Blyler, 2002).his study of marginalization also includes the study of tacit system of values and commitments within bureaucratic hierarchies, high technologies, and corporate capitalism that are embedded in technical communication practices (Miller, 2004).his topic is part of cultural studies in which power and contradictions are explored as ongoing and emerging forces in interaction within organizations (hralls & Blyler, 2002).
Technical communication and digital technologies
Studies have also documented emerging technical communication practices because of the inluence of digital technologies.Developments in mobile technology have encouraged more cooperative work that is contingent and not physically-situated.On one hand, more people can work anywhere by telecommuting, collaborating electronically, and running their own business with mobile phones and laptops; however, on the other hand, freedom to work anywhere oten means isolation and diiculty to build trust and relationships with others, which brings restricted opportunities for collaboration and networking but opportunities for co-working.As a result, co-working is an emerging technical communication practice that has been recontextualized by other inter-organizational activities as freelancing, virtual teams, and peer production (Spinuzzi, 2012).
Because of the inluence of digital technologies in technical communication practices, some scholars have claimed that the main goal of BTC programs is to increase students' marketability to be prepared for the job market, mainly by increasing their skills for documentation process in the age of cooperativetechnological interaction.Accordingly, practitioners are expected to learn problem-solving and analytical skills, writing ability, and lexibility to keep learning on new digital writing situations (Kim & Tolley, 2004;Yeats & hompson, 2010).his is why computer literacy is visible in the pedagogical agendas.here are diferent types of courses: a) skill courses for every day computer literacy at workplaces; b) courses on hardware and sotware for technical communication practices; c) courses on desktop publishing or graphic design programs that are focused on costefective productions for organizations; d) publication management courses; e) computer intensive instruction in introductory writing courses; and, f) computer literacy courses to build critical awareness on digital reading and writing practices.Pedagogies on critical computer literacy particularly advocate for ideological analysis of literacy practices surrounding computer usage against to computer-skills pedagogies (Selber, 2004).
Subject matter of the courses: science communication, professional genres, research skills, and journal publication
Besides including digital communication practices in the curriculum, pedagogical agendas also stress the role of drawing in learning science.Students are encouraged to represent knowledge through drawing, observation, recording, and making inferences as opportunities to learn inductively on scientiic concepts.Diferent goals are pursued: a) encouraging students to produce visual texts; b) increasing awareness of rhetorical efects of graphics and images in science (semiotic realities created by disciplinary communities) rather than groups of facts (Lerner, 2009).
here are, for instance, pedagogical experiences in which students are taught about how scientists make complex decisions in designing experiments and reporting quantitative data for creating claims while addressing their peers.Students are supported for understanding diferences between "raw data" (i.e., jots, plots, notes, outputs, visual traces produced in labs) and "evidence" for publications and communications ("selected data") (Poe, Lerner & Craig, 2010).Systematizations show that students struggle in learning how to gain conidence from data, decide the best "evidence" for dissemination, and deal with sotware or other technical issues while interpreting data; especially because students tend to believe that there is a "correct way" to present visual data instead of creating visual evidence rhetorically-oriented.herefore, these types of interventions help students to a) make decisions according to their audiences by selecting a "meaningful" subset of "raw data" (i.e., using data to "make a case" of their work); b) describe detailed explanations of the indings; and, c) avoid forcing data to it in a theory to rather identify "interesting indings" (Poe, Lerner & Craig, 2010).
Regarding Engineering genres to teach, proposals, progress reports, and completion reports are mentioned (Artemeva, 2005).Job applications and cover letters are also suggested as contents for upper division courses, because freshman or junior students have no experience in workplace cultures.hus, these students have no incentive to look carefully and relectively at past experiences to ind applicable strategies; ultimately, these writing tasks might have no relevance beyond the grade for novice students (Quick, 2012).
Teaching academic research skills for workplaces is also highlighted in the pedagogical agendas through, for instance, encouraging students' publication of capstone experiences that address audiences beyond classrooms.hese initiatives are undertaking by the following stages: a) literature review and research about a speciic topic of the ield; b) oral presentations for department staf and guests (practitioners); c) conference presentations; and, d) journal submissions.hese initiatives also advocate for journal internships in which students learn how to write to be published and how to manage the publishing backstage (i.e., the administrative side of communicating with reviewers and authors, as well as the technical side of journal production by web design, content management, website updating, and e-journal promotion) (Ford & Newmark, 2011).
Teaching technical communication from a rhetorical approach
Teaching on technical writing and communication as a rhetorical practice is strongly encouraged to defy the positivist view of knowledge.Much technical writing teaching maintains the legacy of positivist perspectives by focusing on style, organization, and tone; audience analysis in this perspective is mostly limited to adapting vocabulary of texts.his approach reinforces the view that scientists transmitted "physical realities" by writing (e.g., emphasizing the use of strategies as impersonal voice).Conversely, within rhetorical approaches, one of the subject matters must be "science as argument", which implies learners of technical communication become persuasive professionals embedded in rhetorical situations of scientiic and technological domains (Miller, 2004).
In this approach, learning experiences must include projects to interact with diferent audiences and conduct shared assignments (i.e., across disciplines, diferent degrees of expertise, academic and industry audiences, diferent types of stakeholders within and across organizations).In shared assignments, language assumes the function of satisfying a real need outside of the language classroom (Tatzl, Hassler, Messnarz & Flühr, 2012).However, this efort is challenging because of the heterogeneity of student population in classrooms in terms of their expertise and ield ailiations, especially in the case of BTC courses of general education requirement (McDaniel & Steward, 2011).Team teaching among researchers, practitioners, language instructors, and disciplinary instructors is acknowledged as a pedagogical strategy, but it is rarely implemented due to high costs.In these types of initiatives it is not enough to expose students to diverse audiences; even when they are immersed in an actual workplace setting and are assigned authentic writing tasks, students might not fully understand the implications of their work because they are accustomed to write for pleasing a single-person audience (the instructor) to earn a grade.herefore, students still need scafolding when writing for real workplace audiences (Quick, 2012).
Also, studies of communication skills required by workplace settings have called for curriculum reforms to meet needs of organizations (Reave, 2004).One of the goals is to design learning experiences that expose students to diverse communicative and writing practices according to diferent organizational positions that students could achieve in their professional future, which includes internships and interdisciplinary curriculum that teach about management and business reasoning (McDaniel & Steward, 2011).
Cross-cultural rhetorical studies of professional communication explore workplace literacies and organizational cultures.hatcher (1999; 2000; 2001) studied diferences in cultural expectations between U.S. and South-American professionals with respect to the context provided in oral and written communication, the abstraction or particularity, and the cultural functions of writing and orality.Such rhetorical similarities and diferences can be explained by cultural assumptions about authority, leadership, collectivism, individualism, and work relationships (hatcher, 2005).
Based on surveys, ethnographies, and discoursebased interviews of documents written in English and Spanish, hatcher (1999) claims that rhetorical preferences of every culture must be understood by communication professionals to anticipate challenges, such as: translating documents of intercultural companies, writing procedures and policies in diferent languages, and understanding diverse interpretations of regulations mediated by documents.hatcher's advocacy for intercultural rhetorical studies highlights research approaches in which U.S. models of rhetoric and communication cannot be used as means for cultural domination; instead, comprehensive and emic research methods beyond textual analysis of documents are suggested to comprehend rhetorical cultures.He advocates incorporating researchers from the regions that will be investigated and establishing a common ground in which to operationalize and compare the variables of both cultures in the intercultural context (hatcher, 1999; 2000; 2001; 2005).
Assessment agendas
Regarding assessment in technical communication programs, some scholars suggest institutional assessment of curriculum and learning outcomes according to institutional values of universities; this type of assessment can provide insights for planning (Allen, 2010).Concerning student performance and learning outcomes, assessment on "polymorphic literacy" (i.e., writing as a performance that involves both images and language) is also advised.Moreover, learning assessment of changes in students' performances across time is also recommended; for instance, through validated rubrics to be applied to students' writing samples (Coppola & Elliot, 2010). 11
Discussion
In Latin America there is no speciic ield equivalent to Technical Communication as developed in the U.S.; however, there is a body of emerging Latin-American writing initiatives and studies in Engineering.his section identiies shared features and diferences between the Latin-American endeavors and the U.S. programs by contrasting the results of the analysis of 22 publications on Latin American Engineering writing and the literature review on Technical Communication in the U.S. above.his section follows hatcher's call for conducting crosscultural comparisons by honoring local conditions pertaining to professional communication, higher education, and Engineering writing.
he analysis of Latin-American publications that report initiatives pedagogically-oriented suggests that the ield has emerged primarily focused on the approach "writing to learn"; however, designing writing initiatives that prepare students for workplace communicative practices (Russell, 2007) emerges as a topic of the conclusions, limitations, and implications of the publications analyzed.For instance, as part of the conclusion sections, some of the publications aimed at a) exploring if professional genres have been taught in college years (Natale & Stagnaro, 2015;Natale, 2015); b) diferentiating between professional genres and academic genres (Parodi, 2010a); and, c) collecting samples of professional genres from instructors in senior years (Añino et al., 2010).
Regarding limitations and implications related to professional genres, the publications mentioned, for instance, that there is a lack of descriptions of professional genres to inform curriculum design (Natale & Stagnaro, 2012); thus, further research about professional genres is suggested (Natale & Stagnaro, 2012;Stagnaro, Chiodi & Miguez, 2012).Moreover, incorporating professionallyoriented tasks from freshman years is also stated as an implication of the publications (Parodi, 2010a;Natale & Stagnaro, 2015;Natale;2015). he analysis of conclusions and limitations suggests that in the Latin-American case it is not visible, at least in this sample, tensions related to humanities, professional, and science communication approaches mentioned in the North American literature.
Furthermore, in the Latin-American case, interdisciplinarity (e.g., writing instructors and disciplinary professors or practitioners) is praised through topics of the conclusions (Añino et al., 2011;Gómez, 2014;López & Martínez, 2014;López & Molina, 2015;Natale & Stagnaro, 2015) and envisioned as a further implication (Añino et al., 2010).his Latin-American climate in favor of interdisciplinarity might be also associated with the most frequent type of pedagogical goal undertaken thus far: "writing to learn", since the type of writing assignments might count as low-stake writing assignments (i.e., summaries and weekly reports for solving problems in table 1 and 3) that do not necessarily introduce room for critical stances or contradictory systems of values either for students or instructors involved in the initiatives (for both, Engineering professors and writing advocates).Possibly, when teaching on professional and scientiic writing and communication is incorporated in the Latin-American region, diferent negotiations with disciplinary professors and practitioners might create spaces to start conversations about objectivity or rhetorical production of professional and scientiic knowledge in Engineering.
As happens in the U.S. scholarship, there is an interest reported by some of the Latin-American publications in pursuing research agendas for providing information about transitions and overlapping practices between industry and academia (Bernhardt, 2002;Parodi, 2010;Natale, 2015;Natale & Stagnaro, 2015).One of the Latin-American publications analyzed stresses as a limitation that teaching professional genres is artiicial (Natale, 2015), and in three cases the publications claim for bridging academic writing practices and professional writing practices (Parodi, 2010a;Natale & Stagnro, 2015;Natale, 2015). he U.S. cultural studies making visible and complicating tacit values and commitments of bureaucratic hierarchies, high technologies, and corporate capitalism embedded in the ield (hralls & Blyler, 2002;Miller, 2004) did not emerge from the Latin-American sample.Further analysis of Latin-American publications associated with organizational communication, for instance, might provide insights on this particular issue about critical studies on writing and communication practices within local companies.
he emphasis on writing to learn and the emerging call for further research on professional genres might suggest that in the Latin-American case the debate on what should be the goal of the writing courses in Engineering is not yet present (at least in the sample analyzed), such as the goal of increasing students' marketability (Selber, 2004).his conlict between professional or humanistic role of writing courses might be further traced in Latin-American publications that report studies or initiatives on freshman writing courses, since this is the most frequent pedagogic intervention in the region (see ILEES project at http:// english.ilees.org/).he overall analysis shows that research and initiatives on Engineering writing are advocating primarily for writing to learn and the tendency throughout the publications is to mention genres as individual texts to support student learning (i.e., writing assignments, summaries, computer laboratory tasks, reading mathematical texts, and weekly reports for solving problems with theoretical justiications).hese trends suggest that the Latin-American writing advocates in Engineering might broaden research scopes by incorporating theoretical frameworks for: a. exploring and understanding diferent roles of writing across time and curriculum in student learning and by Engineering subields (e.g., diferences over time among writing summaries, weekly reports for solving-problems, and lab reports in learning the discipline versus writing professional or research genres); and, b. exploring theoretical approaches to understand genres beyond individual texts to embed them as part of systems of activity 12 (i.e., genre repertoires and genre systems 13 ), which implies to rethink curriculum initiatives and studies under the assumption that students as further practitioners of their disciplines will be exposed to a spectrum of roles as genres users (either as readers or writers) and as part of complex overlapping activities impacted by issues of hierarchy and power within organizations.
Digital technologies
Regarding issues on computer literacy, the publications of the sample mentioned computers only in three cases, but they were related to the same Argentine experience (Añino et al., 2010a;2010b;2012;2013).In this case, computer literacy was not part of the teaching contents (Selber, 2004); instead, computer assignments are means to support students' learning (i.e., writing to learn).
Subject matter of the courses
Speciic experiences of research or pedagogical initiatives for creating data and evidence in scientiic writing in Engineering (Poe, Lerner & Craig, 2010) did not emerge from the sample.However, some Latin-American publications report initiatives in which verbal and mathematical systems are seen as articulated to support student learning, or in which research about Engineering texts implies adopting a multimodal approach (Parodi, 2010c).
In the U.S. scholarship, there is a debate about when and how to teach genres related to workplace cultures such as job-applications and cover-letters (Quick, 2012). he analysis of the Latin-American publications (table 3) suggests that these types of genres are not part of the initiatives/studies of the sample.
Despite the size and type of the sample (22 publications of a convenience sampling), this inding might be explained by features either of the higher education systems (by further analyzing to what extent in Latin-American Engineering majors, students are exposed to internships), or features of written genres embedded in job applications (by further analyzing written genres requested, if any, in Latin-American job markets).
he U.S. scholarship of teaching on science communication and research skills (Ford & Newmark, 2011) might be further explored in the Latin-American case by contextualizing what counts as scientiic knowledge production and to what extent this is related to higher education in general and to Engineering majors in particular.hese particular U.S. experiences for teaching genre practices that merge scientiic and professional audiences (e.g., literature reviews or oral presentations that are addressing simultaneously academic and industry audiences) (Ford & Newmark, 2011) continue nourishing conversations about the rhetorical nature of science and technology (science as argument) (Miller, 2004).In this regard, it is interesting to mention that in the Latin-American case, two pedagogically-oriented publications mentioned as limitation that students struggle in developing audience awareness (López & Ramírez, 2012;2014); this diiculty might be also associated with assigning writing activities that are exclusively addressing instructors (writing to learn).herefore, further research initiatives might collect data to compare how audience awareness is developed over time according to types and amounts of learning opportunities in which students have to address either interdisciplinary, academic, and industry audiences.
Assessment agendas
he U.S. scholarship has acknowledged the importance of conducting assessment of curricula and learning outcomes of the programs according to systems of values of the institutions and majors (Allen, 2010;Coppola & Elliot, 2010).In the case of the Latin-American publications analyzed, the writing advocates are already pointing out the need of conducting developmental research on students writing and genre awareness of faculty members who have been participants of the interdisciplinary writing initiatives in Engineering majors (Natale & Stagnaro, 2012;Stagnaro & Jauré, 2013).
Conclusion
his article aimed at exploring features of Latin-American writing initiatives in Engineering reported by publications (journal articles and oral communications) from Spanish-speaking countries to map programs pursued in the region and then provide a context to envision research agendas for Latin-American Writing Studies in Engineering and the development of Technical Communication Programs in the region.
he data suggest that the Latin-American movement might have started under the approach of writing to learn.Possibly, it might be expected that when these Latin-American initiatives and studies incorporate research and pedagogic interventions on professional writing and communication, novel debates will emerge particularly associated with tensions and goals of educating under humanistic and technical approaches of the courses and also regarding the rhetorical nature of science and technology; this climate might further allow integrating critical approaches in the advancement of the Latin-American ield.
Furthermore, the trends of the analysis reveal that the Latin-American writing advocates in Engineering might broaden research scopes by incorporating theoretical frameworks for a) exploring and understanding diferent roles of writing across time/ curriculum in student learning and by Engineering subields and, b) exploring theoretical approaches to understand genres beyond individual texts (genre repertoires and genre systems).
his analysis also makes visible the need of further exploring features of higher education systems and economic conditions under which science and technology are related to Engineering knowledge production as an important context of the emergence of technical communication programs and initiatives beyond the U.S. Additionally, the study conirms the importance of conducting research on development of writing and communication in Latin-American Engineering majors, in general, and in the actual initiatives already undertaken, in particular, by identifying regional disciplinary learning expectations on writing and communication over time.13.Cf.Spinuzzi, 2012;Bawarshi & Reif, 2010;Russell, 2010;Freedman, 2006;Tardy, 2009;Artemeva, 2006;and, Devitt, Bawarshi & Reif, 2003.
Figure 4
Figure 4 Engineering ields mentioned by the studies/initiatives
9.
What agendas are currently proposed on Engineering writing? he analysis of the topics of the conclusion sections suggests that the three most frequent topics mentioned throughout the publications are: a. Writing to learn is associated with improving student engagement (7 cases); b.Writing to learn is correlating with passing the classes (7 cases); c.Interdisciplinary work is an opportunity (6 cases).
Table 1
Relationship between the research questions and the analytical categories
Table 3
Occurrences of learning moments in which writing is incorporated by the initiatives pedagogically-oriented or genres that are studied in diferent learning moments Colombian Department of Science, Technology, and Innovation (Colciencias), Universidad Autónoma de Occidente (UAO), and University of California, Santa Barbara (UCSB).See the project at: http://www.utp.edu.co/vicerrectoria/investigaciones/publicaciones-lectura-escritura/ 6.In this section some fragments of the actual publications are used to illustrate the analysis.Since most of the original data is in Spanish, the cases were translated into English by the author. | 9,455.6 | 2016-09-27T00:00:00.000 | [
"Engineering",
"Education"
] |
Design and Analysis of a Dual Rotor Turbine with a Shroud Using Flow Simulations
This paper describes the flow simulation of a dual rotor, three-bladed wind turbine module with a shroud to determine its performance. The parameters that were evaluated are the effects of adding a second rotor, wind speed, distance between the two rotors, the size of the front rotor and the shroud. The results were obtained by using the Solid Works 2015 flow simulation program. Also, the benefits and cost issues for wind generating systems are illustrated.
Introduction
Wind energy is a form of solar energy which delivers uneven heat to various parts of the Earth's surface and causes an imbalance in the pressure distribution in the atmosphere.Due to the horizontal pressure gradient, wind is generated from the horizontal movement of the air.In certain applications, wind energy can be utilized and developed as an important energy source.
The first windmill, which was the vertical axis system developed in Persia about 500-900 A.D, was used for tasks of grain grinding and water-pumping.
Vertical-axis windmills were also used in China, which is often claimed as their birthplace [1].In the 1950s, with the discovery of oil in the Middle East that was used to generate modern mechanical and electrical power, the development of wind turbine development began to slow down.Twenty years ago, due to the "oil crisis", there was an energy shortage problem.The instability and limited nature of conventional fossil energy supply led to the development of clean, renewable energy and the emergence of wind power.Energy technology has significantly improved for over the past decades.Iron-ically, there are still over 1.1 billion people who live without access to electricity.
Figure 1 shows that most people who have limited electricity are living in Africa, Asian, and South America.Also, around 2.8 billion people have to use wood, candles, or other biomass to fulfill the basic needs of nutrition, cooking, heating, and lighting [2].Obviously, using biomass will produce a large increase in air pollution, which causes about 4.3 million deaths each year.Even though the environmental awareness has been growing for about a century, environmental pollution is still a problem, such as the global warming, haze, and acid rain.
However, there is one clean, renewable form of energy that uses virtually no water and pumps billions of dollars into our economy every year-that is wind energy.Since 2008, the U.S. wind industry has generated more than $100 billion in private investment [3]; as a result, it is necessary to continue to develop advanced wind turbine technology to supply this growing market.
Wind turbines convert the kinetic energy of the wind into mechanical power, which can then be converted into electricity by a generator.With the advancement of technology, several types of wind turbines have been created.Both horizontal and vertical axis turbines can be used for power generation.The three primary types are horizontal-axis wind turbines (HAWT), Savonius vertical-axis wind turbines (Savonius VAWT) and Darrieus VAWT turbines, as shown in Figure 2.
Evaluation Process
In this paper, four variables-wind speed, distance between two rotors, scale of the first rotor and a shroud, are evaluated for a three-bladed, dual rotor wind turbine.By using a flow simulation program, the effect of these variables on the flow velocity, pressure, and output power of the wind turbine were evaluated.
Four different wind speeds-10, 20, 30, 40 m/s, were used in the simulation module.The spacing between the rotors was 1.5 m, 3 m and 4 m.Also, the smaller rotors added to the model, as shown in Figure 3, were 60% and 80% smaller than the larger rotor, respectively.To evaluate the models in the simulation, three variables were kept constant and one variable was changed.For example, for the smaller front rotor flow velocities, which are the wind flow velocities on the front of the second, larger blade, were evaluated with wind speeds of 10, 20, 30, 40 m/s.The front rotor was 60% smaller than the rear one and no shroud on the dual-rotor wind turbine was installed.Then the effects of the distances of 1.5, 3, 4 m between two rotors were evaluated.The flow simulation, using Solid Works 2015, was used to evaluate the different models.
The analysis for an object which has fluid passing through or around it can be very complicated.Evaluations may include the heat transfer, mixing, unsteady and compressible flows around the object.Instead of manufacturing products to test, by using the CAD-embedded Solid Works Flow Simulation program to evaluate the effects of fluid on the performance of the object during the design phase, problems can be addressed early, reducing the cost and avoiding rework.
All the parts-blade, tower, nacelle, smaller rotor and shroud-can be built in Solid Works 2015 and combined as the SLDASM which was used in this evaluation (Figure 3).
The model of the dual rotor wind turbine was designed based on a Vestas V90-3MW wind turbine (Table 1).A smaller rotor was installed in front of the conventional wind turbine to simulate a dual rotor wind turbine.
The local wind speed can be accelerated by capturing and concentrating the wind through some mechanism, such as a shroud.Installing a shroud was the method used in this evaluation.When a shroud was chosen, the most important factor considered was that the shroud have a long diffuser and brim to create vortexes to reduce the pressure downstream of the rotor to cause more flow into the rotor [4].When the entering flow increased, the velocity of the wind speed across the wind turbine increased, resulting in an increase of output power (Figure 4).However, increasing the lengths of the diffuser and brim could adversely affect the durability of the shrouded wind turbine because of the increases of weight and stresses [5].To determine the diameter of the shroud (Dr), the following formula was used: The next step, after finishing building all the parts of dual rotor wind turbine in Solid Works 2015, was to operate the flow simulation with the following steps: define computational domain, choose the goals, run the active project, insert flow trajectories, and show the results.In order to convert the flow velocities and pressures into numbers, the trajectories of the flows were obtained.The gradients of the velocities and pressures were shown in different colors, so that different levels of velocities and pressures were distinguished.For instance, when using the smaller rotor with 60% reduction with an applied wind speed of 10 m/s, there are two types of images-detailed and overview-as shown in Figure 5 and Figure 6.In Figure 5, the gradients of the velocities are shown in different colors in contrast with the numbers on the left side.Further, the trend of the velocity around the dual rotor wind turbine with a shroud on it can be described as shown in Figure 6.Also, by reading the data shown in the flow simulations, the magnitudes of the velocity and pressure in front and back of the smaller rotor, and front and back of the bigger rotor were obtained.
The Front Velocity of the Large Rotor
One of the most important data obtained was the front flow velocity of the larger rotor (Vf2) which was an important factor in determining the output wind power.2).
Front Velocity Affected by
According to Table 1, Figure 7 and Figure 8, the front flow velocity increased when the wind speed increased.However, the impact of the distance between rotors was not significant.Also, the scale of the rotors had a positive effect.
When the scale of the front rotor was decreased, more air flow was allowed to enter the larger rotor which resulted in more output power.
Front Velocity Affected by Wind Speed, Diameter and Separation
Distance (without Shroud) The shroud was the most important factor that affected the velocity and pressure of the air flow across the wind turbine.In this evaluation, (Vx) speeds of 10, 20, 30, and 40 m/s were used as before, and the spacing between the rotors was changed from 1.5 m up to 4 m.The flow simulations were repeated for each spacing and the results were recorded.In order to show the results more clearly, the results from the flow simulation were converted to numbers, as shown below (Table 3, Figure 9 and Figure 10).
Even though the diameter of the smaller rotor was changed from 60% to 80%, the distance between rotors had a small impact on the flow velocity.However, when the wind speed increased, the flow velocity acting on the rear rotor increased when the smaller rotor was reduced in size.
Front Velocity Affected by Shroud
Since the distance between the two rotors did not have a big effect on the front velocity of the larger rotor, the shroud became an important factor to be evaluated.Using the data in Table 2 and Table 3, rotor separation distance from 1.5 m to 4 m for both reductions, and for a reduced front rotor of 60% and 80% of the bigger rotor.
According to both Figure 11 and Figure 12, a significant influence resulted using the shroud.The front velocity was increased from around 30 m/s to 120 m/s when the wind speeds were applied from 10 m/s to 40 m/s for the module with a shroud on it.The velocity gradient was around 90 m/s, which is much larger than that for the module without a shroud.Therefore, adding a shroud on the wind turbine generates much more flow energy then changing the distance between rotors [6].theoretical maximum power efficiency of any design of a wind turbine is 0.59 (i.e.no more than 59% of the energy carried by the wind can be extracted by a wind turbine) [7].Using the Betz Limit, there is a maximum Power Coefficient, C p , which is the ratio of power extracted by the turbine to the total contained in the wind resource: Then, in this case, the power obtained from the wind speed which acts on the front of the rotor is obtained: As shown in the Table 6 and Figure 13 up to Figure 18, the distance between rotors does not change the output power.The power only increases around 200 kW when the distance is changed from 1.5 m to 4 m.On the other hand, the shroud has a significant effect on the output power.For instance, using a rotor separation distance of 1.5 m and wind speed of 40 m/s, the power difference was around 43,000 kW when using a shroud, which was a significant increase (Figure 13 to Figure 18).Therefore, even though the shroud might add to the overall cost of the wind turbine system, it will provide a significant increase in output power.
EES Programing
Using the velocity and pressure of the back rotor for all the different cases, the power was calculated using the Engineering Equation Solver (EES) program.
Because the flow velocity V is not constant, it was convenient to write an EES Figure 13.Power affected by shroud (Small Rotor 60%).
Figure 11 & Figure 12 were developed with the applied wind speed, ranging from 10 m/s up to 40 m/s, and the
Figure 8 .
Figure 8. Bigger rotor front velocity affected by different scales of small rotor (80%), with Shroud.
Figure 9 .
Figure 9. Larger rotor front velocity affected by different scale of small rotor (60%).
Figure 10 .
Figure 10.Larger rotor front velocity affected by different scale of small rotor (80%).
Then the wind turbine power output is:
P
is the maximum value of p C and equals 16/27.However, there is an alternative way to estimate the annual energy output when a wind turbine system is analyzed.This method is based on the Weibull or Rayleigh distribution method using an average wind speed of V (m/s) at the wind turbine hub's height and a capacity factor (CF).The capacity factor can be determined as: Annual energy (kWh/yr) = (KW) 8760 (h/yr) CF R Actual energy delicered 8760 (h/yr) Average power CF
Table 2 .
Front velocity affected by wind speed, diameter and rotor separation distance (without shroud).
Table 3 .
Front velocity affected by wind speed, diameter, and rotor separation distance (With Shroud). | 2,788.8 | 2017-04-20T00:00:00.000 | [
"Engineering"
] |
The comprehensive quality evaluation of scutellariae radix based on HPLC fingerprint and antibacterial activity
. Scutellaria Radix, a traditional Chinese medicine, studies on its main active ingredient are limited. In this study, the purpose was to investigate the quality difference of Scutellariae Radix from different origins based on chemical components and biological activities. The chromatographic fingerprints of Scutellariae Radix from 33 origins were established using HPLC, and the antibacterial activities were studied with the microdilution method. Moreover, orthogonal partial least-square regression, pearson correlational analysis and grey relational analysis methods were performed to explore the relationship between the compositions and bioactivities. In addition, and origin identification model was established to comprehensively evaluate the quality of Scutellariae Radix. The results showed that Scutellariae Radix had in-vitro antibacterial effect on Staphylococcus aureus, and the best were in Gansu, Shandong Province. Multivariate statistical analysis common showed that three components were positively correlated with antibacterial activity, which were respectively wogonin, baicalein and oroxylin. In conclusion, the quality of Scutellariae Radix varies greatly from different origins, and the better was in Gansu and Shandong Province. This work successfully provides a general model that combined the chromatographic fingerprint and bioactivity assay to study the spectrum–effect relationships, which could be used to discover the primary active ingredients in traditional Chinese medicines.
Introduction
etc. [2] . The main medicinal part of SR is rhizomes, among which baicalin, baicalein, wogonin and wogonoside are its main active ingredients and quality control indicators [3] . In recent years, the wild resources of SR have been declining due to the over exploitation, making it difficult to meet market demand [4][5] . Blind introduction of SR without considering the ecological suitability, making the quality of SR from different producing areas varies greatly. Therefore, it is necessary to evaluate the quality of SR from different geographical origins.
Since traditional Chinese medicine (TCM) had the characteristics of multiple targets and multiple components in the treatment of diseases, it is more difficult to comprehensively evaluate the quality of TCM and excavate the active ingredients. From a macro perspective, the chromatographic fingerprint can reveal the overall chemical characteristics of TCM and solve the one-sidedness of a single index in quality evaluation to a certain extent [6][7] . However, it can not be used to find the main active ingredients of TCM to further explore its pharmacological effects. "Spectrumeffect relationship" can establish a relationship between fingerprint peaks and specific pharmacodynamic indexes, and then this relationship can be used to find the main effect components of TCM to reflect its internal quality [8][9] . This method can reflect the real active ingredients and more comprehensive pharmacological information to a certain extent, so it is more suitable to control the quality of TCM. Some mathematical statistical methods have been applied to find the corresponding active components in the spectrum-effect relationship. For example, Liu et al. explored the spectrum-effect relationship between fingerprints and three pharmacological effects through gray correlation analysis and partial least square regression, and found the main active components corresponding to the main pharmacological effects of Farfarae Flos [10] . At present, the application of this method in the quality evaluation of SR is limited.
To this end, based on the samples collected in China, the microdilution method was performed to detect the difference in antibacterial activity of SR from five main producing areas and determine its chemical fingerprint. Combining the "spectrumeffect relationship research, the main medicinal components of the antibacterial activity of SR were preliminary discussed. In addition, on this basis, an orthogonal partial least squares discrimination analysis (OPLS-DA) identification model for distinguishing different origins was established to comprehensively evaluate the quality of SR collected from different origins.
Materials and reagents
33 batches of SR from five producing areas covering Gansu (GS), Inner Mongoria (NM), Shanxi (SX), Shandong (SD) and Shaanxi (SN) province were collected and authenticated by Prof. Huibin Lin (Academy of TCM, Shandong, China). Table 1 shows the origins of the 33 batches of samples. All samples were sampled according to the rule of quartering method specified in Chinese Pharmacopeia (four part, 2020 edition) and a quarter of samples were selected for retention. The remaining medicinal materials were crushed and sifted through 20 mesh sieve for screening with 65 mesh sieve. The samples were weighed in proportion, bagged and labeled respectively, and then stored in dry room temperature environment before chromatographic analysis. Staphylococcus
Prepration of sample and standard solutions
Accurate weighing SR powders according to the ratio of 0.57 g between 24-65 mesh and 65 mesh sieve, and put it into a round bottom flask containing 50 mL of boiled distilled water. After being refluxed for 40 min, take the additional filtrate as the sample solution. The internal standard solution of puerarin (30 g·mL -1 ) was mixed with the sample solution by equal volume, and filtered with 0.45 μm filter membrane. All the samples were stored in a refrigerator at 4℃ for further analysis.
Methodology validation of fingerprint analysis
Precision was evaluated by six consecutive injections of the sample solution, while repeatability was performed by six replicates of a sample from the same origin. For the storage stability test, the sample solutions were tested in a day (0, 2, 6, 8, 12 and 24 h). Chempattern TM software was used to obtain fingerprint information and verify the methodology test. After calculating the relative peak area value and relative retention time value, results of relative standard deviation (RSD) value of 19 peaks were all less than 3.00%, which was verified by the methodology experiment.
Common peak identification of fingerprints
The chromatographic peak retention time and peak area values of 33 samples were recorded at 280 minutes. Combined with the chromatographic information and literature data of the reference substances, the chemical components of the common peaks were preliminarily analyzed and determined.
Experiments of antibacterial effects
In this study, a blank group, a positive drug group (amoxicillin group), a bacterial solution control group, and an administration group were established. The optical density (OD) value of the 96-well plate was measured by the xMark microplate reader. The samples were placed in a constant temperature and humidity incubator for 18-24 h at 37°C. The microdilution method was applied to determine the antibacterial rate of the sample water extract.
Grey relational analysis
Grey correlation analysis is a method to determine the degree of correlation between factors based on the similarity degree of each factor [11] . Therefore, it can be considered as a simple and effective comprehensive evaluation method of spectrum-effect relationship. In this study, the program for grey correlation analysis was carried out by MATLAB R2017a software. The correlation coefficients between the independent variable (relative peak areas) and the dependent variable (antibacterial rates) was measured with this grey relational analysis model.
Pearson correlation analysis
Pearson correlation coefficient (r) was regarded as a metric of checking the linearity of relationships between different variables [12] . These coefficients were calculated when comparing relative peak areas (X) and antibacterial rates (Y). The analysis was performed by IBM SPSS 20.0 software.
Orthogonal partial least squares regression Analysis (OPLSR)
As a statistical analysis method to find the causal relationship between variables, regression can be used to analyze the relationship between dependent variables and independent variables. It can also be used for predicting the mean value of dependent variables through independent variables [13] . The orthogonal partial least squares regression (OPLSR) method has high applicability when the datasets were small and the correlation was large. In this study, the OPLSR model was established by SIMCA-P+ 14.1 software based on the 19 common peaks and one pharmacodynamic index. The main effective components of the antibacterial effect were found by observing the importance of variables in projection (VIP) and regression coefficient.
Geographical origin traceability based on orthogonal partial least squares discrimination analysis
OPLS-DA is a linear supervised classification method based on orthogonal partial least squares regression algorithm, which can characterize the identification ability of SR samples from different geographical origins based on HPLC fingerprint dataset. In this method, variables with the maximum covariance are found from the content matrix (X) and the classification matrix (Y), and are classified according to the sample score. Y = 1 means that the sample belongs to a specific classification, and Y = 0 means that the sample does not belong to the specific classification [14] . In this study, 28 samples were divided into training sets randomly to build the model, and 5 samples were used as test sets to externally verify the model performance. The model was established based on internal 7-fold cross validation. The stability of the model was evaluated according to some important parameters such as root mean square error of cross validation (RMSECV) and root mean square error of prediction (RMSEP). When the values of these parameters were smaller, it indicated that the model had a better fitting degree [15] .
Results of HPLC fingerprints
HPLC fingerprints of 33 batches SR samples were obtained through the optimization of chromatographic condition. 19 common peaks were marked as P1-P19 according to the range of the retention time. Compared with the standard, they were identified as P1, P2, P3, P4, cynaroside, P6, 5,7,8-trihydroxyflavone, baicalin, P9, chrysin-7-O-glucuronide, norwogonin-7-O-glucuronide, P12, wogonoside, P14, P15, baicalein, wogonin, chrysin and oroxylin. The results of HPLC fingerprints are shown in Fig. 1. The chromatographic peak retention time and peak area values within 280 min were recorded for 33 samples. In order to establish a mathematical model in connection with the antibacterial effect, the relative peak area of the HPLC under bacteriostatic concentration was calculated, as shown in Table 2. Method validation for HPLC fingerprint results shown that the relative standard deviation (RSD) for method precision and repeatability, alone with storage stability of sample solutions within 24 h appeared less than 3.00%. (1)
Results of antibacterial experiments
The micro-dilution method was applied to determine the antibacterial rate of the sample water extract. The results are shown in Table 3. The results showed that good quality was mostly found in GS and SD province, however, Samples from SN province had the worst antibacterial effects. In addition, we found that different batches of samples from the same origin had different antibacterial effects, indicating a great difference in quality.
Results of grey relational analysis
In this study, the grey correlation analysis method was used to determine the contribution of each component of SR to the inhibitory effect of Staphylococcus aureus according to the degree of correlation. Therefore, the antibacterial rate was regarded as the parent sequence, and the peak value of each component was taken as the subsequence. Due to the different units or dimensions of the original data, dimensionless processing was required for each sequence before the correlation analysis. The consistency of dimension was achieved by means of average transformation, and then the calculation was carried out [16][17][18] . The calculation results were shown in Table 4. According to the principle of grey correlation, the component with a higher correlation degree has a more significant influence on the antibacterial effects. The value of correlation degree grater than 0.9 indicates that the sub-sequence has a significant influence on the parent sequence. When was greater than 0.7 and less than 0.8, it meant that there was an obvious influence. When was less than 0.6, there is a very small effect. The components that had significant influence on the antibacterial effect were component 1, (17), oroxylin (19). It provided the theoretical basis for the quality control model based on spectrum-effect relationship.
Results of orthogonal partial peast squares regression analysis
The relative peak area data of the 19 fingerprint peaks were pre-processed by normalization as X matrix, and the antibacterial rate of each sample was taken as Y variable for analysis. The standardized regression coefficient of each variable to dependent variable was obtained, as shown in Fig. 2 (A). Based on the regression coefficient, OPLSR model was established and the regression equation (2) was obtained. The larger regression coefficient indicated that the independent variable had a greater contribution to the dependent variable. The positive value of the coefficient meant that the component had a positive correlation with the antibacterial rate. VIP, variable importance index, used to describe the explanatory ability of the independent variable to the dependent variable [19] . Wold suggested that variables with VIP greater than 1.0 could be considered to have a big contribution to the dependent variable [20] . The results of VIP analysis are shown in Fig. 2 (B).
Results of pearson correlation analysis
In the bivariate correlation analysis, the relative peak area of 19 fingerprint chromatographic peaks of 33 batches of SR and the pearson correlation coefficient of the antibacterial efficacy value were calculated. The results are shown in Table 5. -0. 497 * 0.604** 0.614 ** -0.388 * 0.448 ** Notes: Significant differences (P <0.05) marked as "*", (P < 0.01) marked as "**" The results showed that among the relative peak area of the 19 fingerprint peaks, component 6, component 9, chrysin-7-O-glucuronide (10), component 12, wogonoside (13), component 14, and chrysin (18) were significantly correlated with the value of antibacterial rate (P<0.05). The components of baicalin (8), norwogonin-7-O-glucuronide (11), 15, baicalein (16), wogonin (17), and oroxylin (19) were very significantly correlated with the value of antibacterial rate (P<0.01). In general, wogonin, baicalein, oroxylin could be used as an evaluation index to reflect the positive bacteriostatic effect. Moreover, as shown in Fig. 3, we found that the correlation between the twelve components and antibacterial value were basically consistent with the results of orthogonal partial least squares regression analysis.
The discriminant results of the model training set and the prediction set were further evaluated. According to Table 6, the classification accuracy of the training set was 85%. The cross misclassification phenomenon existed in NM, SN and SX, which indicated that the original HPLC data had redundant information, thus affecting the classification accuracy of the model. In addition, all the samples in the prediction set were classified correctly, which proves that the model has an excellent effect in predicting unknown samples. In a word, the established OPLS-DA model had excellent learning ability and higher prediction accuracy, which was suitable for the origin identification of SR. As shown in Fig. 4, the potential factors were screened using OPLS-DA, and the first three potential factors could interpret 77.73% of the chromatographic information cumulatively. At this time, R 2 was 0.5782 (R 2 > 0.5), indicating a good fitting accuracy of the established model. The values of RMSECV and RMSEP of the models were 0.2190 and 0.2927 respectively, suggesting that the discriminant error was within an acceptable range. In addition, 200 permutation tests (Fig. 5 A) were used to further verify the overfitting of the model. The Q 2 -intercepts intersected the negative half axis of the Y-axis and had a value of 0.4509. The R 2 and Q 2 values generated by each operation were lower than the original R 2 and Q 2 values, indicating that there was no risk of overfitting in the model. The 3D scatter plot shown that GS and SD had great differences in origin, and NM has a certain degree of overlap with SN and SX. This result could also be verified according to the confusion matrix results of the OPLS-DA model (Table 6). According to the screening results of VIP feature variables (Fig. 5 B), Wogonin, Cynaroside, Baicalein, 5,7,8-Trihydroxyflavone and Wogonin-7-O-glucuronide could be regarded as the key indicators of geographic origin identification.
Discussion
At the beginning of building the HPLC detection method was to detect the Yinhuang granules, the raw medicinal materials (Scutellariae Radix, Lonicerae Japonicae Flos) and their extracts at one time. In this method, the relevant components of Lonicerae Japonicae Flos were detected in the first 95 minutes, the SR was detected after 95 min. So we are only showed the latter part of this chromatogram starting at 95 min.
It is important for pharmacodynamics study to select a reasonable data processing method. In the process of data analysis, multiple methods are selected to verify each other, which can make the results more convincing. At present, the most commonly used data processing methods include grey correlation analysis, correlation analysis, cluster analysis, principal component analysis, partial least squares regression analysis, artificial neural network and etc. Grey relational degree analysis can reveal the unknown information according to the known information, which contains the idea of holistic view, so it is suitable for the analysis of complex components of traditional Chinese medicine [21] . At the same time, there may be multicollinearity between the characteristic peaks of traditional Chinese medicine fingerprints, so the common multiple linear regression analysis is not applicable. However, OPLSR can not only simplify data structure and regression modeling, but also have unique advantages for data samples with small sample size and multiple correlation problems between variables [22] . Therefore, OPLSR was used to reflect the relationship between each component of SR and its bacteriostatic rate in this study.
The results shown that baicalein, wogonin, chrysin, oroxylin, 5,7,8-trihydroxyflavone (norwogonin), baicalin, norwogonin-7-O glucoside, cynaroside, component 1, 3, 6 and component 14 contributed significantly to the antibacterial rate, which was consistent with the results of correlation analysis. 5,7,8-Trihydroxyflavone (7), baicalein (16), wogonin (17) and oroxylin (19) were positively correlated with antibacterial activity. The material basis of SR for inhibiting Staphylococcus aureus was preliminarily determined, which coincided with the research of Xing et al. [23] . The least squares support vector machine (LS-SVM) method was used to establish a mathematical model. The model could be used to accurately predict the antibacterial rate of the pharmacodynamic index value by detecting the HPLC fingerprint of SR. The OPLS-DA model based on HPLC data can effectively identify the origin of SR and provide a reliable method for its quality control.
Conclusion
In this study, the spectrum-effects relationship of SR was discussed by combining HPLC fingerprint and antibacterial activity. Results of OPLSR, grey correlation analysis and pearson correlation analysis showed that baicalein, wogonin and oroxylin were the main effective antibacterial component. The structures and actions of substances such as P1, P2, P3, P4, P6, P9, P12, P14, and P15 still require further confirmation, as these components may have the potential to have antibacterial effects. In addition, the origin identification model which could be applied to other samples of Traditional Chinese medicine was also established. In the end, the exact pharmacological mechanism of active ingredients in SR will be studied in the future. | 4,286.8 | 2021-01-01T00:00:00.000 | [
"Medicine",
"Chemistry"
] |
Strategies Shaping the Transcription of Carbohydrate-Active Enzyme Genes in Aspergillus nidulans
Understanding the coordinated regulation of the hundreds of carbohydrate-active enzyme (CAZyme) genes occurring in the genomes of fungi has great practical importance. We recorded genome-wide transcriptional changes of Aspergillus nidulans cultivated on glucose, lactose, or arabinogalactan, as well as under carbon-starved conditions. We determined both carbon-stress-specific changes (weak or no carbon source vs. glucose) and carbon-source-specific changes (one type of culture vs. all other cultures). Many CAZyme genes showed carbon-stress-specific and/or carbon-source-specific upregulation on arabinogalactan (138 and 62 genes, respectively). Besides galactosidase and arabinan-degrading enzyme genes, enrichment of cellulolytic, pectinolytic, mannan, and xylan-degrading enzyme genes was observed. Fewer upregulated genes, 81 and 107 carbon stress specific, and 6 and 16 carbon source specific, were found on lactose and in carbon-starved cultures, respectively. They were enriched only in galactosidase and xylosidase genes on lactose and rhamnogalacturonanase genes in both cultures. Some CAZyme genes (29 genes) showed carbon-source-specific upregulation on glucose, and they were enriched in β-1,4-glucanase genes. The behavioral ecological background of these characteristics was evaluated to comprehensively organize our knowledge on CAZyme production, which can lead to developing new strategies to produce enzymes for plant cell wall saccharification.
Introduction
Fungi can use a wide range of organic compounds as a carbon and energy source. The spatial and temporal availability of these compounds is highly variable in most natural habitats where fungi occur. Not surprisingly, carbon stresses (carbon-starvation stress and carbon-limitation stress) are among the stresses that most affect the life of fungi [1][2][3]. The type and availability of carbon/energy sources highly influence many aspects of fungal life, including the production of carbohydrate-active enzymes (CAZymes), secretion of extracellular hydrolases (other than CAZymes) [2,4], formation of secondary metabolites (including mycotoxins or molecules with pharmaceutical interest) [5,6], as well as their sexual and asexual differentiation, stress tolerance, and even their antifungal susceptibility and pathogenicity [7][8][9][10]. Understanding carbon-starvation stress responses, hence, has great practical importance both in the fermentation industry and in medical mycology.
The plant cell wall consists of several polysaccharides including cellulose (and in smaller portions, mixed-linkage glucan), hemicelluloses (e.g., xylans, xyloglucans, mannans), and pectins [11]. In addition to these, plant cell walls contain different hydroxyprolinerich peptides, such as extensins and arabinogalactan proteins, harboring a substantial portion of saccharide residues [11]. Many fungi are able to utilize these polysaccharides. They secrete extracellular enzymes to cut (usually hydrolyze) these polymers into parts (usually oligomers/monomers), and these smaller molecules are taken up for further degradation/utilization. Efficient degradation of complex saccharides generally needs multiple, synergistically acting enzymes [4,12,13]. Extracellular degradation of polysaccharides lets cells produce the needed amount of enzymes in cooperation. Moreover, the mono-and oligosaccharides liberated extracellularly function as "public goods," since they can be utilized by any microbes in the vicinity, even by those called "cheaters," who have not secreted any enzymes contributing to the degradation. Not surprisingly, extracellular enzyme production of microbes is intensively studied by behavioral ecologists [14]. Efficient saccharification of plant polysaccharides is crucial in the cost-efficient production of second-generation biofuels [15]. Fungi are important sources of various industrially important enzymes including those that can be used for saccharification of complex plant materials. Understanding how fungi adapt to different carbon/energy sources can help to identify new genes or molecules involved in the regulation of enzyme production as well as new (auxiliary) enzymes for efficient degradation of these biopolymers. This knowledge can be used to improve the efficiency of plant cell wall saccharification or to reduce the production cost of the enzymes used [15][16][17].
Aspergillus nidulans, like many other filamentous fungi, commonly occur in habitats rich in decaying plant materials. Its genome contains more than 400 CAZyme genes (Carbohydrate-active Enzymes Database; http://www.cazy.org/; 10 December 2021). Thanks to these, it can grow efficiently on complex plant materials [18][19][20]. Not surprisingly, it is not only a model organism but also a potential source of various enzymes for industrial application [21].
Here, we studied genome-wide transcriptional changes in four A. nidulans cultures differing in the available carbon/energy source: cultures incubated in glucose-rich or carbon-source-free media, as well as cultures growing on lactose or arabinogalactan. We aimed to explain the transcriptional changes of CAZyme genes as well as the physiology of the fungus using the following four strategies connected to the formation and utilization of public goods: (1) "secretion of scouting enzymes" [22][23][24]; (2) "adaptive prediction" in stress responses [25,26]; (3) prevention of the rise of "cheaters" [14]; and (4) Garrett Hardin's "tragedy of the commons" scenario [14].
Strains and Culture Conditions
The A. nidulans THS30 (pyrG89; AfupyrG + ; pyroA4; pyroA + ) reference strain [27] was used in this study. It was maintained on Barratt's minimal agar plates at 37 • C [28] and only freshly collected conidia from 6 d cultures were used for inoculation in all experiments. For submerged cultivation, 500 mL Erlenmeyer flasks containing 100 mL Barratt's minimal broth were inoculated with 50 × 10 6 conidia/flask and were incubated at 37 • C and 3.7 Hz (approx. 220 rpm) shaking frequency for 16 h. The exponentially growing phase mycelia were collected by filtration, washed, and then transferred into 100 mL fresh Barratt's minimal broth. These media contained 20 g L −1 glucose, 20 g L −1 lactose, or 20 g L −1 arabinogalactan (from larch wood, Sigma-Aldrich Ltd., Budapest, Hungary) as carbon/energy source, or did not contain any carbon source at all. All cultures were further incubated at 37 • C and 3.7 Hz shaking frequency.
Detecting Growth, Metabolic Activity, Carbohydrate Utilization, and Formation of Sterigmatocystin
Growth of the cultures was monitored by measuring their dry cell mass (DCM) content [29]. The metabolic activity of the hyphae was characterized by their methylthiazoletetrazolium (MTT) reduction activity as described previously [29]. Glucose, lactose, and arabinogalactan consumption was followed by detecting the reducing sugar content of the media with p-amino-hydroxybenzoic acid hydrazide [30], while sterigmatocystin production was demonstrated by thin-layer chromatography (TLC) [31].
Enzyme Assays
Catalase and superoxide dismutase (SOD) activities were measured by rate assays [32,33] either with cell-free extracts prepared by X-pressing [34] or with the fermentation broth. Intracellular glutathione reductase, nitrate reductase, and β-galactosidase activities were determined according to Pinto et al. [35], Bruinenberg et al. [36], and Nagy et al. [37], respectively, from the cell-free extracts. Extracellular chitinase [34], N-acetyl-glucosaminidase [38], βglucosidase [30], cellulase [30], γ-glutamyl transpeptidase (γGT) [39], and protease [40] activities were determined from the fermentation broth. In the case of cellulase determination, which was based on measuring the increase in the reducing sugar content in the carboxymethylcellulose substrate solution, fermentation broth was dialyzed against 0.1 mol L −1 K-phosphate buffer (pH 6.5) prior to measurement to remove reducing sugars. Protein content of the cellfree extracts was measured with Bradford reagent. In the case of the glucose-containing cultures, all enzyme activities were determined at 4 h to prevent biases caused by the quick decrease in the glucose content of the cultures. In the case of the carbon-stressed cultures, specific SOD, catalase, nitrate reductase, glutathione reductase, and β-galactosidase activities were measured at 12 h. All the other (extracellular) enzyme activities were determined using 24 h cultures, since in the case of these enzymes the activity values were too low in the 12 h cultures for precise determination.
Reverse-Transcription Quantitative Real-Time Polymerase Chain Reaction (RT-qPCR) Assays
Lyophilized mycelia were used to isolate total RNA according to Chomczynski [41]. RT-qPCR assays were carried out with the primer pairs listed in Table S1 using Luna ® Universal One-Step RT-qPCR Kit (New England Biolabs, Ipswich, MA, USA) following the manufacturer's protocol. Relative transcription levels were characterized with the ∆CP (difference between the crossing point of the reference and target gene within a sample) values using the AN6700 gene (putative translation elongation factor) as reference [42].
High throughput RNA Sequencing
Total RNA was isolated from lyophilized mycelia [41] from four different cultures using three biological replicates: For "fast-growing" cultures, exponentially growing phase mycelia were transferred into Barratt's minimal broth containing 20 g L −1 glucose as carbon/energy source and were incubated for 4 h at 37 • C and 3.7 Hz shaking frequency.
For "carbon-starved" cultures, exponentially growing phase mycelia were transferred into carbon-source-free Barratt's minimal broth and were incubated for 12 h at 37 • C and a 3.7 Hz shaking frequency.
For "carbon-limited" cultures, exponentially growing phase mycelia were transferred into Barratt's minimal broth containing either 20 g L −1 lactose or 20 g L −1 arabinogalactan and were incubated for 12 h at 37 • C and 3.7 Hz shaking frequency.
RNA sequencing (from library preparation to generation of fastq.gz files) as well as the RT-qPCR assays was carried out at the Genomic Medicine and Bioinformatic Core Facility, Department of Biochemistry and Molecular Biology, Faculty of Medicine, University of Debrecen, Debrecen, Hungary. A single-read 75 bp Illumina sequencing was performed as described previously [43]. All library pools were sequenced in the same lane of a sequencing flow cell, and 11.5-20.8 million reads per sample were obtained. The FastQC package (http://www.bioinformatics.babraham.ac.uk/projects/fastqc; 10 December 2021) was used for quality control. The hisat2 software (version 2.1.0) was used to align reads to the genome of A. nidulans FGSC A4 (genome: http://www.aspergillusgenome.org/download/sequence/A_ nidulans_FGSC_A4/archive/A_nidulans_FGSC_A4_version_s10-m04-r12_chromosomes.fasta.gz; 1 September 2021; genome features file (GFF): http://www.aspergillusgenome.org/download/ gff/A_nidulans_FGSC_A4/archive/A_nidulans_FGSC_A4_version_s10-m04-r12_features_with_ chromosome_sequences.gff.gz; 1 September 2021;). More than 93% of reads were successfully aligned in the case of each sample. Differentially expressed genes were determined with DESeq2 (version 1.24.0).
Evaluation of the Transcriptome Data
When the transcriptomes of two cultures were compared ("A" vs. "B"), upregulated and downregulated genes were defined as genes that had shown significantly different expression (adjusted p-value < 0.05) and log 2 FC > 1 or log 2 FC < −1, respectively, where FC (fold change) stands for the number calculated by the DESeq2 software using "B" as the reference culture. In the case of carbon-starved cultures as well as cultures growing on arabinogalactan or lactose, "carbon-stress-responsive" genes were regarded as upregulated or downregulated genes coming from comparisons where glucose-containing cultures were used as reference. Genes were regarded as "culture specifically" upregulated (or downregulated) if they were upregulated (or downregulated) relative to the other three cultures.
The composition of carbon-stress-responsive and culture-specific gene sets was studied with gene set enrichment analyses carried out with "Functional Catalogue" (FunCat), "Gene Ontology" (GO), and "Kyoto Encyclopedia of Genes and Genomes pathway" (KEGG pathway) terms using the FungiFun2 package (https://elbe.hki-jena.de/fungifun/fungifun.php; 10 December 2021), applying default settings. Hits with a corrected p-value < 0.05 were regarded as significantly enriched and were taken into consideration during evaluation. Terms containing less than three genes or hits with only one gene were omitted from the analysis.
The enrichment of genes belonging to the groups defined below was tested by the Fisher's exact test with the "fisher.test" function of R project (www.R-project.org/; 10 December 2021). The following gene sets were studied (the composition of the gene sets is available in Tables S2 and S3): "Lactose utilization" genes: This gene group contains genes involved in the Leloir and oxido-reductive pathways of galactose utilization, the galX (AN10543) and galR (AN7610) genes encoding the transcriptional regulators of the above-mentioned pathways [44], as well as known and putative β-galactosidase and lactose permease genes according to Fekete et al. [45,46] and to the Aspergillus Genome database (AspGD; www.aspergillusgd. org; 1 September 2021).
"Extracellular peptidase" genes: Peptidase genes encoding N-terminal signal sequence but not transmembrane domain collected from the AspGD.
"Carbohydrate-active enzyme" (CAZyme) genes: Genes collected from the Carbohydrateactive Enzymes Database (http://www.cazy.org/; 10 December 2021), except the genes presented in the "Cell wall" genes group were omitted. "Ribosome biogenesis" genes, "Mitotic cell cycle" genes, and "Transcription factor" genes: These groups were constructed based on the AspGD using the related GO terms and their child terms. "Secondary metabolism cluster" genes: Manually or experimentally determined secondary metabolite cluster genes collected by Inglis et al. [50]. Gene set enrichment analysis was carried out with the clusters separately. Clusters where the upregulated or downregulated genes were significantly enriched (Fisher's exact test, p < 0.05) in the appropriate gene set were regarded as upregulated or downregulated clusters, respectively.
Identification of Extracellular Proteins
Proteins from the fermentation broth were separated by 2D gel electrophoresis [51]. Selected spots were cut from gels manually, and, after trypsin digestion, peptides were analyzed on a 4000 QTRAP (AB Sciex, Framingham, MA, USA) mass spectrometer coupled to an Easy nLC II nanoHPLC (Bruker, Billerica, MA, USA). The obtained LC-MS/MS data were used for protein identification based on the ProteinPilot 4.5 (ABSciex) search engine, and the SwissProt database and minimum two peptides with 99% confidence were required for identification of the protein [39].
Three Carbon Stress Types Caused Similar Physiological Changes
Physiological consequences of three different types of carbon stress (carbon-starvation stress as well as carbon-limitation stresses in the presence of either a disaccharide or a polysaccharide) were studied. Transferring mycelia from glucose-containing media to carbon-source-free or lactose/arabinogalactan-containing media significantly decreased the MTT-reducing activity of the cultures ( Figure 1A). However, this decrease was temporary; after 4 h, MTT-reducing activity of the carbon-stressed cultures started to increase even in the case of the carbon-starved cultures ( Figure 1A). Carbon stress also reduced the growth of the fungus as expected ( Figure 1B). The highest reduction in biomass production, with respect to glucose-containing cultures, was observed in carbon-starved cultures followed by arabinogalactan and lactose-containing cultures. Although long-term carbon starvation is generally accompanied with DCM decline [34], only a small (but not statistically significant) decrease in the DCM was observed at 12 h in this case ( Figure 1B).
The intracellular SOD activities were increased on arabinogalactan and in carbonstarved cultures. The fermentation broth of all the four cultures had detectable SOD activity and the carbon-stressed cultures had detectable catalase activity as well (Table 1), which concur with the results of Saykhedkar et al. [18] who studied the secretome of A. nidulans cultivated on sorghum stover. Carbon stress increased extracellular proteinase, γGT, chitinase, and β-glucosidase activities (Table 1). Growing on lactose or on arabinogalactan but not carbon starvation also increased extracellular β-galactosidase and cellulase activities (Table 1).
In order to identify further enzymes from the fermentation broth of carbon-starved cultures, extracellular proteins were separated by 2D gel electrophoresis and the protein content of selected patches was analyzed. The presence of the following proteins in the fermentation broth was demonstrated (Table S4): AbnC (AN8007, putative extracellular endo-1,5-α-L-arabinosidase); EglB (AN3418, cellulase); BglA and BglL (AN4102 and AN2828; putative β-glucosidases); ChiB (AN4871, chitinase); EglC (AN7950, putative GPI-anchored glucan endo-1,3-β-D-glucosidase); PepJ (AN7962, protease); CatB (AN9339, catalase); and SodA (AN0241, Cu/Zn-SOD). The presence of AbnC, CatB, EglB, PepJ, SodA, and the AN8445 peptidase in the fermentation broth of lactose-containing cultures has already been demonstrated [39]. van Munster et al. [24] found that upregulation of CAZyme genes by carbon stress is accompanied by the secretion of the corresponding proteins in A. niger even in the case of carbon starvation. Accordingly, the genes of the proteins detected here were upregulated by the appropriate carbon stress (Table S2) with the following exceptions: We could not detect the upregulation of eglB in carbon-starved cultures or the upregulation of sodA and AN8445 in lactose-containing cultures (Table S2). However, we could detect the presence of the encoded proteins in the fermentation broth (Table S4, [39]). In the case of SodA and EglC (detected during carbon starvation), as well as of CatB (detected both in carbon-starved cultures and on lactose) (Table S4, [39]), the corresponding genes even showed downregulation by carbon stress (Table S2). These proteins may be accumulated in the cells (in the case of cell-wall-anchored EglC, in the cell wall) on glucose and may be released into the fermentation broth only under stress.
Transcriptome Analyses Revealed Important Differences among the Carbon-Stressed Cultures
Based on principal component analysis, the available carbon source substantially affected the transcriptome of the fungus as expected (Table S5). In the case of 28 genes, transcriptional changes obtained with RNA sequencing were compared with those recorded with RT-qPCR, and a good positive correlation was observed (the Pearson's correlation coefficient was 0.79) (Table S5).
Carbon-starvation and carbon-limitation stresses changed the expression of many genes ( Figure 2A). Earlier studies demonstrated that there is a substantial overlap among the early stress responses of carbon-limited and carbon-starved cultures [24]. This has been explained by the near-starvation of carbon-limited cultures until utilization of the alternative carbon sources is established [24]. Therefore, we took samples at times when utilization of lactose and arabinogalactan had already started according to their growth pattern ( Figure 1). Nevertheless, we found substantial overlap among the genome-wide transcriptional changes of the three carbon-stressed cultures: 39% of the carbon-stressresponsive genes showed upregulation in all three carbon-stressed cultures, and 38% showed downregulation (Figure 2A). By definition, these same genes showed culturespecific downregulation and upregulation, respectively, in glucose-rich cultures. They also highly exceeded the number of culture-specific down-and upregulated genes in carbon-stressed cultures ( Figure 2B). Gene set enrichment analyses of carbon-stress-responsive genes suggest that carbon starvation downregulated bulk protein synthesis, several elements of primary metabolism (e.g., glucose utilization, amino acid biosynthesis, steroid synthesis), and the transcription of several stress genes. On the other hand, it upregulated genes involved in cell wall organization, chitin, xylan, and pectin degradation, as well as fatty acid oxidation (Table 2 and Table S6). Replacing glucose with lactose downregulated several elements of primary metabolism (e.g., glucose utilization and steroid synthesis); however, in contrast to carbonstarved cultures, downregulation of amino acid biosynthesis or "ribosome biogenesis" and "translation" was not observed. It upregulated extracellular polysaccharide utilization, including the metabolism of pentoses and hexoses other than glucose ( Table 2 and Table S6). Growing on arabinogalactan downregulated glucose utilization, amino acid biosynthesis, and steroid synthesis. As on lactose, downregulation of bulk protein synthesis was not observed. Upregulation of extracellular polysaccharide utilization genes was detected; however, genes involved in pentose or hexose (e.g., galactose, mannose) metabolism were enriched in both the upregulated and downregulated gene sets ( Table 2 and Table S6). Gene set enrichment analyses of culture-specific genes suggest that processes related to glucose utilization and growth (e.g., glycolysis, respiration, biosynthesis of steroids, vitamins, cofactors prosthetic groups) were upregulated, while the polysaccharide catabolic process and lipid metabolism (e.g., fatty acid oxidation) were downregulated in glucose-rich cultures relative to all the other cultures (Table S6). In parallel with this, carbon-stressed cultures were characterized by the upregulation of different elements of carbohydrate catabolism and by the downregulation of a few processes mainly related to growth (Table S6). For further analysis of how carbon-stressed cultures ensure their energy needs, we selected the gene groups presented in Table 3, Table 4 and Table S2: a -"up", "down", and "both" stand for significant enrichment in the upregulated, in the downregulated, and in both the upregulated and downregulated gene sets. When no significant enrichment was observed, the "-" symbol was used. Further details on the behavior of the gene groups are available in Table S2.
In arabinogalactan-containing cultures, araR [53], but not galR or galX, was upregulated. Upregulation of the D-galactose oxidoreductive pathway genes, the same as those upregulated on lactose (Table S2), as well as lacD, lacpA, and lacpB, as well as several other known/putative lactose permease and β-galactosidase genes, was also observed. Interestingly, the Leloir pathway genes were even downregulated in the carbon-starved cultures ( Table S2), suggesting that this pathway may relate to the synthesis of galactose-containing saccharides rather than galactose degradation. The lacpB and lacD genes, and some other putative/known β-galactosidase genes, were upregulated even in the carbon-starved cultures (Table 3 and Table S2), which concurs with elevated β-galactosidase activities detected in all carbon-stressed cultures (Table 1).
Glycolysis genes were downregulated by carbon stress (Table 3 and Table S2) but the downregulation of oxidative pentose phosphate shunt and TCA cycle genes was observed only in carbon-starved and arabinogalactan-containing cultures, which showed the smallest growth in our experiments ( Figure 1B).
Carbon stress significantly influenced cell wall homeostasis (Table 3 and Table S2). In general, downregulation of synthase, transglycosylase, and regulatory protein genes, as well as upregulation of hydrolase genes, was observed, which concurs with the reduced growth of the cultures ( Figure 1B) (Table 3 and Table S2). However, in the case of lactose-containing cultures, which showed the fastest growth out of the three carbonstressed cultures (Figure 1), the enrichment of the above-mentioned genes in the appropriate gene sets was not significant (Table S2). The cell wall hydrolase genes engA (endo-1,3-βglucanase), chiB (endochitinase), and nagA (N-acetyl-β-glucosaminidase), which have been demonstrated to be important in the utilization of the cell walls of dead cells (autolytic cell wall degradation [2,54]), were upregulated in carbon-starved and arabinogalactancontaining cultures (Table 3, Tables S2 and S5). These upregulations together with the upregulation of AN1427 (putative N-acetylglucosamine transmembrane transporter gene), AN1428 (putative N-acetylglucosamine-6-phosphate deacetylase gene), and AN1418 (putative glucosamine-6-phosphate deaminase gene), as well as of AN2424 (putative Nacetylhexosaminidase) (Table S2), suggest that autolytic cell wall degradation takes place not only in carbon-starved, but also in slow-growing cultures. Importantly, transcriptional activation of the cell wall integrity pathway was not observed, supporting the view that the cell wall of only dead cells (the so called "empty hyphae") was degraded and living cells could protect themselves against these hydrolase activities [42]. Previously, we found that melanin production can protect cells against chitinases, including the ChiB chitinase [42]. The chiB gene was upregulated not only in carbon-starved and arabinogalactan-containing cultures, but also in cultures growing on lactose (Tables S2 and S5). Not surprisingly, the Ivo cluster, responsible for N-acetyl-6-hydroxytryptophan-type melanin formation, was upregulated (Tables S6 and S3), together with aromatic amino acid metabolism genes ( Table 2 and Table S6) in all carbon-stressed cultures. Many cell wall hydrolase genes were downregulated in all three cultures (Table S2). In cultures growing on lactose, cell wall hydrolase genes were even enriched in the downregulated gene set (Table 3 and Table S2). These genes may contribute to the biosynthesis rather than the degradation of the cell wall [49], like the chiA gene which was downregulated in all three carbon-stressed cultures (Table S2) and encodes a chitinase involved in cell wall remodeling and maturation during growth [55]. Cell wall hydrolase genes were enriched in the upregulated glucose-specific and carbon-starvation-specific gene sets (Table S2). The two non-overlapping, upregulated gene sets (Table S2) also demonstrate that cells use different sets of hydrolases for cell wall biosynthesis and for cell wall degradation.
Enrichment of upregulated autophagy-related genes was observed only in the carbonstarved cultures (Table S2), which demonstrates that, like autolytic cell wall degradation, autophagy is also an important process that can provide energy sources in the absence of external nutrients.
Carbon stress upregulated extracellular peptidase (protease) genes in all cultures; however, their enrichment in the upregulated gene sets was significant only in carbonstarved and arabinogalactan-utilizing cultures (Table 3 and Table S2). Even in the lactosecontaining culture, 10 putative/known extracellular peptidase genes were upregulated, which was accompanied with high protease activities (Table 1). These data demonstrate that-in cultures used for heterologous protein production-any carbon stress can be unfavorable, due to the supposed intensive proteolytic degradation [56].
Upregulated CAZyme genes were significantly enriched in the carbon-stressed cultures (Table 3, Table 4, Figure S1). Some of them are related to the applied carbon source: e.g., enrichment of the upregulated β-galactosidase genes was observed on both lactose and arabinogalactan, while upregulated arabinofuranosidase and endo-arabinosidase, as well as α-galactosidase genes, were enriched in the upregulated stress-responsive genes of arabinogalactan-containing cultures only (Table 3 and Table S2). Upregulation of AN9166 (putative exo-1,6-galactanase [57]) was also observed only on arabinogalactan (Table S2). However, many upregulated CAZyme genes were (putatively) involved in the utilization of carbohydrates that had not been added to the media: Upregulated β-1,4-endoglucanase/cellulase, β-glucosidase, cellobiohydrolase-cellobiosidase genes were enriched on arabinogalactan, while upregulated xylosidase and rhamnogalacturonan utilization genes were enriched in all three carbon-stressed cultures (Table 3, Table 4, and Table S2). The upregulated culture-specific gene sets contained several CAZyme genes in each of the four culture types (Table 4 and Table S2 and Figure S1). The most CAZyme genes (65 genes) were observed in arabinogalactan-containing cultures, surprisingly followed by glucose-rich cultures (29 genes), then by carbon-starved cultures (16 genes), and lastly by lactose-containing cultures (6 genes) ( Table 4 and Table S2 and Figure S1). Enrichment of genes belonging to many CAZyme subcategories was observed with arabinogalactan-containing cultures (Table 4 and Table S2 and Figure S1). No enrichment was found with the carbon-starved cultures, suggesting that the CAZyme genes upregu-lated specifically in these cultures are distributed among several CAZyme subcategories (Table 3 and Table S2). Interestingly, in lactose-containing cultures only α-galactosidase, but not the β-galactosidase genes, showed enrichment. In fact, out of the β-galactosidase genes, only lacD, encoding the β-galactosidase, which is essential for lactose utilization, showed significantly higher transcriptional activity on lactose than in all the other cultures (Tables S2 and S5). Glucose-rich cultures were characterized by the enrichment of β-1,4-endoglucanase/cellulase genes (Table 4 and Table S2 and Figure S1).
Transcription factor genes were enriched in the upregulated gene sets of carbon-stressed cultures and in the downregulated glucose-specific gene set (Table S2 and Figure 3). Besides the upregulation of galR and galX on lactose and araR on carbon-starved and arabinogalactancontaining cultures, upregulation of clrA responsible for the regulation of cellulase and xylanase production [58], rhaR responsible for rhamnose utilization [59], and brlA and xprG responsible for extracellular peptidase and fungal cell wall hydrolase production [2,54,60] in carbon-stressed cultures are notable (Tables S2 and S5). Many upregulated peptidase, CAZyme, and cell wall genes encode (putative) extracellular enzymes. Accordingly, the gene of HacA [61] transcription factor, responsible for unfolded protein stress response (endoplasmic reticulum stress), was upregulated by carbon stress (Tables S2 and S5). Since secondary metabolism highly depends on the available carbon source [62], we also evaluated the transcriptional behavior of secondary metabolite cluster genes. These genes were enriched in both the upregulated and the downregulated gene sets of all three carbon-stressed cultures ( Table 2 and Table S6). All four cultures had a characteristic set of secondary metabolite cluster genes that showed the highest activity in that culture (Table S6). The upregulated and downregulated clusters showed a similar pattern. The most important clusters upregulated by carbon stress are listed in Table 5. Among them, upregulation of the sterigmatocystin cluster is notable, since all the 26 cluster genes showed upregulation in the three carbon-stressed cultures, and the formation of this mycotoxin was also demonstrated in these cultures with TLC ( Figure 4). Clusters characteristically upregulated or downregulated in a culture are summarized in Table S3. It is notable that although the highest number of downregulated clusters was observed on glucose, there were clusters (Microperfuranone cluster, Pkh cluster, and AN3273 cluster) that showed upregulation relative to all the other cultures under this condition (Table 5 and Table S3). (12) 12/0 12/0 11/0 AN8105 cluster (10) 8/1 9/1 8/1 Pkb cluster (9) 6/0 7/0 8/0 Pkg cluster (6) 6/0 3/0 6/0 Emericellamide cluster (5) 0/0 5/0 1/1 Terriquinone cluster (5) 5/0 5/0 5/0 AN1680 cluster (4) 1/0 4/0 4/0 Penicillin cluster (3) 0/1 3/0 3/0 Ivo cluster (2) 2/0 2/0 2/0 AN9129 cluster (2) 2/0 2/0 2/0 AN9314 cluster (2) 2/0 2/0 2/0 a -Only upregulated clusters where all (in the case of clusters with ≤8 genes) or all but 1 gene (in the case of clusters with >8 genes) showed simultaneous upregulation in at least one comparison are presented. The number of genes involved in the cluster is indicated in parentheses. Further data on the transcriptional behavior of the secondary metabolite cluster genes are available in Table S3.
Discussion
We studied the behavior of four A. nidulans cultures differing from one another in the available carbon/energy source. These cultures contained either glucose (as an easily utilizable monosaccharide), or lactose (as a disaccharide, allowing slower growth than glucose does), or arabinogalactan (as a complex polysaccharide providing only slow growth), or did not contain any carbon sources (Figure 1). Since extracellular proteins are accumulated in the fermentation broth, their detectability highly depends on the biomass/culture volume ratio as well as on the duration of accumulation. Moreover, extracellular proteins can persist in the fermentation broth for generations after the transcription of their genes is downregulated, e.g., proteins secreted during the early stress response when both carbon-starved and carbon-limited cultures are starving can remain in the fermentation broth until the late stress response when their production may be downregulated. In order to avoid these limitations, we used a transcriptome-based approach. Nevertheless, we have to keep in mind that changes in the transcription of a gene do not necessarily cause changes in the abundance of the corresponding protein. For example, the aim of a transcriptional upregulation can be to stabilize the protein level, to reduce its fast decrease [63], or simply to prepare for its fast synthesis when (and if) it becomes advantageous.
Carbon-Starved Cultures Produce Many Different "Scouting" Enzymes
Utilization of extracellular polysaccharides is challenging for microbes because they have to identify the polysaccharides present in the environment in order to secrete the appropriate enzyme or enzyme mix for their efficient degradation. This problem is generally solved by secretion of "scouting" enzymes [22][23][24]. These are "ordinary" CAZymes, which do not completely degrade a polymer, but liberate some oligomers/monomers in a costefficient manner. These latter compounds ("regulatory molecules") are recognized by cells, resulting in increased production of all the enzymes needed for the complete and efficient degradation of the polymer [22][23][24].
In our carbon-starved cultures, many CAZyme genes (107 genes) were upregulated relative to glucose-rich cultures. However, only 16 of them were culture specific, i.e., showed higher transcriptional activity in the studied culture than in the other cultures (Table 4 and Table S2 and Figure S1). The CAZyme subgroups studied were not enriched in the upregulated carbon-starvation-specific gene set and most of them were not enriched in the upregulated carbon-starvation-stress gene set, either (Table 3, Table 4 and Table S2). These properties concur well with the "secretion of scouting enzymes" strategy: A. nidulans, like A. niger [24], secreted several enzymes simultaneously during starvation to search for alternative carbon sources, but did not upregulate complete gene sets needed for the utilization of a polysaccharide.
Interestingly, many of the rhamnoglacturonan degradation genes (12 out of the 16 genes) and almost half of the galacturonan, arabinan, and xylan degradation genes (altogether 25 out of the 54 genes) behaved as scouting enzyme genes (showing upregulation in these cultures), while cellulose degradation genes did not (Table S2, Tables 3 and 4). Unlike cellulose, the non-cellulosic components of the plant cell wall are quite variable; therefore, their recognition may need several different enzymes. After recognition, however, their efficient degradation may be less problematic than that of cellulose with a partially crystalline structure. This may explain the high ratio of scouting enzyme genes in these groups.
In carbon-starved cultures, utilization of stored compounds (e.g., glycogen or even glutathione [27]), autophagy, and autolytic cell wall degradation (Table 3 and Table S2) [2,42] can provide energy sources. Besides the detection of potential carbon sources, autolytic cell wall degradation also needs intensive production of extracellular enzymes. Not surprisingly, early carbon stress response upregulated a set of ribosome biogenesis genes and ER-specific processes (such as ER to Golgi vesicle transport, protein glycosylation, or ER stress response) [42]. Even in the late stress response studied here, the transcription of hacA encoding a transcription factor regulating ER stress response [61] was still upregulated (Tables S2 and S5). Hence, adaptation to carbon stress seems to rely heavily on adapting the behavior of ER. Manipulation of ER activity can lead not only to improved secretion of an enzyme but can also lead to faster adaptation to the applied CAZyme-producing conditions in the fermentation industry. Secreted enzymes and adaptation to carbon stress are both important during fungal infections [8,9,64], which calls attention to antifungal strategies based on disturbing ER activity of the fungus.
Adaptive Prediction Can Be Important in the Regulation of CAZyme Genes on Arabinogalactan
"Adaptive prediction" is a phenomenon commonly used to explain "stress cross protection" in stress biology [25,26]. It means that under stress, cells upregulate stress response elements to cope with the actual stressor and also other elements to prepare for the most likely subsequent stresses. As a consequence, one stressor can increase the tolerance against another stressor. Plant cell wall polysaccharides rarely occur in isolation in the natural habitats of fungi: the presence of one type increases the probability of other types also occurring there. Therefore, it is reasonable to assume that the regulatory molecules formed during the degradation of one type of polysaccharide will upregulate CAZyme genes needed for the recognition and/or the degradation of other possibly co-occurring polymers as well ("cross-upregulation").
The arabinogalactan used in our study consists of a β-1,3-D-galactopyranosyl polymer as a backbone with side-chains including α-L-arabinofuranosyl (at C6), β-1,6-Lgalactobiosyl (at C4 or C6), and 4-O-(α-L-arabinofuranosyl)-β-D-galactopyranosyl (at C6) units (https://www.megazyme.com/arabinogalactan-larch-wood; 10 December 2021). Not surprisingly, several genes encoding enzymes potentially involved in the degradation of this polymer were upregulated, including galactosidase and arabinofuranosidase genes ( Table 3, Table 4 and Table S2). Interestingly, the genome of A. nidulans does not encode any orthologue of A. flavus Af3G β-1,3-endogalactanase [65]. We only observed upregulation of AN9166 (a putative exo-1,6-galactanase gene), but not of galA (β-1,4-endogalactanase) ( Table S2). Galactose and arabinose were most likely utilized via the D-galactose oxidoreductive pathway and the overlapping pentose catabolism pathway [53], respectively (Table S2). Growing on arabinogalactan did not upregulate autophagy genes, suggesting that energy production shifted towards the utilization of arabinogalactan components; however, the transcription of genes involved in autolytic cell wall degradation (e.g., chiB, engA, nagA) was still high (Table S2). Interestingly, arabinogalactan-containing cultures showed bulk upregulation of genes involved in, or putatively involved in, xylan, galacturonan, rhamnogalacturonan, or cellulose utilization as well (Table 3, Table 4 and Table S2). This upregulation cannot be explained by the "secretion of scouting enzymes" strategy: Many of the upregulated genes showed significantly higher transcriptional activity on arabinogalactan than in all the other cultures, and in many subgroups (α-glucosidases, β-glucosidases, β-1,4-endoglucanases, cellobiohydrolase-cellobiose dehydrogenases) more genes showed upregulation on arabinogalactan than in carbon-starved cultures (Table 4 and Table S2 and Figure S1). This property of arabinogalactan-containing cultures could be explained by postulating a cross-upregulation effect of some of the regulatory molecules liberated from arabinogalactan. Similar cross-upregulation between some CAZyme groups has already been documented: Production of certain cellulolytic enzymes is induced in the presence of xylose in A. niger [66], or on arabitol and xylans in Trichoderma reesei [67]. Moreover, since lactose as a β-galactoside disaccharide is structurally more similar to the oligomers formed during the degradation of galactose-containing polysaccharides than to those released during cellulose degradation, the induction of cellulases by lactose in T. reesei [68] and in Acremonium cellulolyticus [69] can also be the consequence of cross-upregulation.
High Lactose Concentration May Activate Strategies to Control the Cheaters
Extracellular degradation of polysaccharides leads to formation of public goods, i.e., extracellular mono-and oligosaccharides freely available for any cells in the vicinity. Since cheaters use public goods but-by definition-do not invest in enzyme secretion, they can easily increase in population [14]. The successful management of this problem is complex. Different mechanisms, including the use of cell-surface-attached enzymes to limit the diffusion of the liberated compounds away, improved uptake of the liberated nutrients, or development of spatial structures to separate enzyme producers from cheaters can be used [14]. Extracellular degradation of biopolymers is commonly controlled by feedback inhibition and repression [4,70]. These negative feedback regulatory mechanisms, together with the quick utilization of the liberated molecules, control the size of public goods and limit the amount of liberated compounds diffused away from enzyme producers. Hence, negative feedback regulation can play an important role in the restriction of cheaters.
Free lactose is rare in nature (commonly occurring in the milk of mammals); however, both the β-galactoside linkage (e.g., in xyloglucans, rhamnogalacturonans, and arabinogalactan-proteins) and α-galactoside linkage (e.g., in galactomannans, galactoglucomannans, and extensins) are common in plant cell wall saccharides [11]. Galactose is also part of the galactomannan (as galactofuran side-chains) and galactosaminogalactan (as α-1-4-linked galactose and N-acetylgalactosamine residues) components of the fungal cell wall [71]. Not surprisingly, the genome of A. nidulans, similarly to those of many other fungi, contains several αand β-galactosidase as well as a few galactanase genes (AspGD ; Table S2) to hydrolyze these polysaccharides and/or the oligo-and disaccharides liberated during their degradation.
Lactose-containing cultures were characterized by the upregulation of genes directly involved in lactose utilization, such as lacD β-galactosidase, lacpA, and lacpB lactose permeases [45,46], as well as the D-galactose oxidoreductive pathway [53] (Table S2). Only 81 CAZyme genes were upregulated on lactose, and all but one of them were also upregulated either on arabinogalactan and/or during carbon starvation (Table 4 and Table S2 and Figure S1). Cells growing on lactose upregulated fewer extracellular peptidase and (fungal) cell wall hydrolase genes than in the two other carbon-stressed cultures (Table S2). Interestingly β-galactosidase genes other than lacD were also upregulated on lactose; moreover, several α-galactosidase genes were also upregulated ( Table 4 and Table S2 and Figure S1). Among the encoded enzymes, LacA is known to hydrolyze lactose but also hydrolyze the terminal β-1,3 and β-1,4 galactofuranosyl residue from oligosaccharides [72]. The lacA gene, unlike lacD, encodes an N-terminal signal peptide sequence suggesting extracellular use of the enzyme (AspGD). Most likely, the majority of the upregulated galactosidase genes are involved in the utilization of galactose-containing compounds other than lactose. Among the genes playing a role in the degradation of galactose-containing polymers (xyloglucan and rhamnogalacturonan degradation genes as well as galactanase, arabinofuranosidase, and endo-arabinosidase genes), fewer were upregulated on lactose than in the two other carbon-stressed cultures (Table 4 and Table S2 and Figure S11). The number of upregulated genes related to the degradation of compounds not containing galactose (β-1,4-endoglucanase, β-glucosidase, cellobiosidase, and cellobiose dehydrogenase genes) was 11 on lactose, i.e., more than those recorded in carbon-starved (8 genes) and less than in arabinogalactan-containing cultures (19 genes) ( Table 4 and Table S2 and Figure S1). These changes together suggest that a high lactose concentration may have imitated a situation where galactose-containing polysaccharides were present in the environment and their degradation was so efficient that the galactose-containing oligomers started to accumulate extracellularly. In this situation, the importance of searching for alternative nutrients, upregulating genes involved in the degradation of possibly co-occurring polysaccharides, activating autophagy, or maintaining intensive autolytic cell wall degradation is reduced. However, cells need to keep the amount of public goods at an appropriate (not too high) level to prevent the rise of cheaters. Therefore, the rate of utilization and the rate of liberation of any potential public goods should be balanced. This leads to a CAZyme profile on lactose where more CAZyme genes involved in the degradation of identified/predicted polysaccharides are upregulated than in carbon-starved cultures, but less than on arabinogalactan (Table 4 and Table S2 and Figure S1).
It is very likely that the effect of lactose highly depends on its concentration and that it acts as a dual regulatory molecule. At low levels, it can upregulate genes involved in (galactose-containing and co-occurring) polysaccharide degradation to enhance their utilization as found in other species [68,69], while at high concentrations it may downregulate them to control oligomer concentration in the environment. Similar behavior was found with xylose which can either upregulate (at low concentration) or downregulate (at high concentration) several xylanolytic and cellulolytic genes in A. niger [73]. Moreover, this dual regulation of cellulolytic enzyme production was observed in A. terreus with xylose, cellobiose, and even with glucose [74]. In the case of Saccharomyces cerevisiae invertase, upregulation of the corresponding suc2 gene by low concentration of glucose was also observed alongside glucose repression at high glucose concentrations [75].
High Glucose Concentration Can Lead to the "Tragedy of the Commons" Scenario
Garrett Hardin's "tragedy of the commons" scenario occurs when the strategy of public goods utilization is beneficial for individuals but not for the community, since it leads to total depletion of public goods. It is commonly explained by the example of a public pasture used by various herdsmen. The best (self-)interest of each herdsman is in adding new animals to their own herd, since the negative consequences of overgrazing are shared evenly, but the benefits of extra animals will accrue to their owners individually. However, the addition of new animals to the different herds, continuing according to this principle of self-interest, will ultimately result in the complete destruction of the pasture by overgrazing [14]. In the case of extracellular polysaccharide utilization, the "tragedy of the commons" scenario can be prevented if microbes balance public goods formation and utilization. Since the best interest of each cell is using as much public goods as possible and not investing in further enzyme secretion to keep the balance (facultative cheating [76]), the fast depletion of public goods is predicted, unless cheaters are controlled.
Although free glucose is not as rare as lactose in the environment, it is not an abundant molecule in the soil or many other habitats of Aspergillus species. The majority of glucose occurs as monomers of different αand β-glucans. Regarding the plant cell wall, cellulose, mixed-linkage glucan, xyloglucan, and glucomannan are the most common glucose-containing compounds [11], while in the case of the fungal cell wall, β-1,3and α-1,3-glucans are notable [49].
In cultures growing on glucose, glycolysis genes were upregulated, while autophagy and autolytic cell wall degradation genes were downregulated relative to the carbonstarved cultures (Table S2). Extracellular peptidase genes as well as CAZyme genes also showed low transcriptional activity (Table S2). Interestingly, some CAZyme genes reached the highest transcriptional activity on glucose: Altogether, 29 genes had significantly higher activity in these cultures compared to all three carbon-stressed cultures (Table 4 and Table S2 and Figure S1). Among them, agdB (α-glucosidase) [77], bglJ (β-glucosidase) [57], xgcA (xyloglucanobiohydrolase) [78], as well as four putative β-1,4-endoglucanase genes (AN1041, AN6786, AN7891, and AN8068), are notable. These genes encode enzymes that are involved, or putatively involved, in the degradation of glucose polymers. Importantly, we could detect low cellulase activity (but not β-glucosidase activity) in the fermentation broth of glucose-containing cultures ( Table 1). Induction of both endoglucanase and βglucosidase activities was recorded on (low concentration of) glucose in A. terreus [74], and β-glucosidase secretion on glucose was also detected in different Aspergillus species [42,79]. The above-mentioned characteristics of cultures growing on glucose (e.g., the numerous upregulated culture-specific genes involved in β-glucan degradation) resemble those observed with lactose. We can assume that a high glucose concentration also imitates a situation for A. nidulans when it grows on a polysaccharide and its degradation is so efficient that monomers have accumulated in the environment. Therefore, cultures enhanced their glucose utilization, controlled glucose liberation, and downregulated processes aimed at searching for alternative nutrients or to degrade predicted polysaccharides other than glucans; these cultures also downregulated autophagy or autolytic cell wall degradation. Downregulation (low transcriptional activity) of genes encoding extracellular enzymes (e.g., peptidases or plant and fungal cell wall hydrolases) on glucose was more obvious than on lactose, suggesting that cheating was a more beneficial strategy on glucose than on lactose. This may be explained by noting that cells can utilize glucose much faster than lactose. Preventing the depletion of glucose as public goods would thus need very intensive glucan degradation, and the high enzyme activity needed for it represents a high cost that is not favorable for cooperation. This leads easily to the "tragedy of the commons" scenario and the rise of facultative cheating.
A good strategy for (facultative) cheaters is investing in fast vegetative growth. Growing, as an autocatalytic process, needs high levels of glucose as energy and a carbon source itself; in addition, the newly formed cells will also utilize glucose for their further growth. As a consequence, fast growing allows cells and their progeny to utilize more from public goods than other, slower-growing cells. Not surprisingly, genes related to the utilization of glucose and vegetative growth, including glycolysis genes, aerobe respiration genes, 2-oxocarboxylic acid metabolism, and ergosterol biosynthesis genes, as well as genes involved in the biosynthesis of vitamins, cofactors, and prosthetic groups, and additionally those involved in nitrogen, sulfur, and selenium metabolism or cell wall homeostasis were enriched in the upregulated glucose-specific gene set (Tables S2 and S6). Antioxidant enzyme genes were also enriched in the upregulated glucose-specific gene set (Table S2). This concurs well with aerobic utilization of glucose leading to the formation of reactive oxygen species, and supports the idea that fast growth based on an aerobic metabolism can be dangerous for microbes [80]. The downregulated glucose-specific gene set was even bigger than the upregulated one ( Figure 2B). It was enriched with extracellular peptidase and CAZyme genes (other than β-1,4-endoglucanase genes) (Table S2). Moreover, the highest number of downregulated secondary metabolism clusters was also observed on glucose (Table S3). The enrichment of transcription factor genes in this set ( Figure 3B and Table S2) (even though some of them may encode a negatively acting regulator) suggests that many processes active in carbon-stressed cultures were downregulated on glucose. In summary, cells did not only upregulate many genes on glucose to enhance glucose utilization but also downregulated numerous unnecessary genes/processes (including many elements of secondary metabolism) which could otherwise reduce their growth rate and hence the quick utilization of public goods.
Out of the four cultures we studied, the glucose-rich cultures contained the most genes that reached their highest or lowest transcriptional activity under the studied conditions ( Figure 2B). This is important because when we compare two transcriptomes, the obtained difference necessarily depends on both of them. Previously, we suggested that the recorded (stress) response equally characterizes how the cells have adapted to the conditions before the (stress) treatment and how they are trying to adapt to the new conditions [81]. It is also true when the consequences of more than one treatment are compared. The size and composition of gene sets upregulated or downregulated in all the treatments (core stress response genes) are highly dependent on the chosen reference cultures. In our study, if we choose the glucose-containing cultures as "stress-free" reference cultures, we can obtain a large core stress response gene set ( Figure 2). The huge number of co-regulated genes suggests that carbon starvation and limitation stress responses are similar. However, it also means that the cultures growing on glucose are quite different from the other cultures studied. On the other hand, if we select, for instance, carbon-starved cultures as the reference, the difference among the "carbon-source-induced stress" responses would be much smaller, and the nature of the core stress response genes would also be very different. Finding a good reference condition has primary importance in transcriptome-based studies and our selection should depend on the question(s) we want to answer. In the case of the Aspergillus species, choosing glucose-rich cultures as a reference can be practical in general, but is usually not a good choice if we want to understand how they respond to different treatments in their natural habitats or in the human body [26,63]. In contrast, in the case of the Saccharomyces species, where glucose feast is common in their sucrose-rich habitats due to their high periplasmic invertase activity [75], studying stress responses on glucose seems to be not only convenient due to the intensive growth, but reasonable as well.
Concluding Remarks
A. nidulans, as a typical soil-borne fungus, usually grows on decaying plant materials. Regulation of the several hundred CAZyme and other genes needed for the efficient degradation of plant biopolymers is a complex process. "Regulatory molecules" formed during the degradation of these polymers play a central role [4,12,13,70]. They are important in the recognition of the compounds present (see "secretion of scouting enzymes" by carbonstarved cultures), they can have cross-upregulatory activity (see "adaptive prediction of the available biopolymers" on arabinogalactan), and at high concentrations, they can downregulate genes which can be important in the control of cheaters (see lactose-containing cultures), or can even lead to the rise of facultative cheaters (see glucose-rich cultures). Detailed investigation of both the molecular and behavioral ecological aspects of this complex regulatory network in the future can help us to understand CAZyme secretion of fungi better, which may enhance their industrial application. For example, systematic recording of CAZyme co-regulations can draw our attention to less-studied hydrolases which can enhance the degradation of plant cell wall polysaccharides in the industry as auxiliary enzymes. Studying fungal strategies based on adaptive prediction for polysaccharide degradation can lead to the discovery of new regulatory molecules. Knowledge of these molecules or understanding their dual (up-and downregulatory) nature can promote the development of new, cost-efficient fermentation processes for CAZyme production.
Regulation of CAZyme gene expression is a good example of cells initially recognizing a stress (change), but not yet "knowing" how to adapt to it. Hence, as a first step they give a "scouting response" and use the collected information to drive further steps in the right direction. It is reasonable to assume that stress responses other than the carbonlimitation stress response may also involve different kinds of "scouting elements" to direct the response in the appropriate direction. This would explain why fungi can commonly adapt even with artificially modified genetic backgrounds, and even to very different artificial environments.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10 .3390/jof8010079/s1, Figure S1: Effects of carbon sources on the upregulation of selected CAZyme gene groups. Table S1: RT-qPCR primer pairs used in the study. Table S2: RNAseq data of selected gene groups. Table S3: RNAseq data of secondary metabolite cluster genes. Table S4: Secretome data of selected proteins. Table S5: RT-qPCR data of selected genes and principal component analysis of the RNAseq data. Table S6: Results of the gene set enrichment analyses.
Conflicts of Interest:
The authors declare that they have no conflict of interest. | 11,576.6 | 2022-01-01T00:00:00.000 | [
"Biology",
"Engineering"
] |
ADAPTIVE TRIMMED MEAN AUTOREGRESSIVE MODEL FOR REDUCTION OF POISSON NOISE IN SCINTIGRAPHIC IMAGES
A 2-D Adaptive Trimmed Mean Autoregressive (ATMAR) model has been proposed for denoising of medical images corrupted with poisson noise. Unfiltered images are divided into smaller chunks and ATMAR model is applied on each chunk separately. In this paper, two 5x5 windows with 40% overlapping are used to predict the center pixel value of the central row. The AR coefficients are updated by sliding both windows forward with 60% shift. The same process is repeated to scan the entire image for prediction of a new denoised image. The Adaptive Trimmed Mean Filter (ATMF) eradicates the lowest and highest variations in pixel values of the ATMAR model denoised image and also average out the remaining neighborhood pixel values. Finally, power-law transformation is applied on the resultant image of the ATMAR model for contrast stretching. Image quality is judged in terms of correlation, Mean Squared Error (MSE), Structural Similarity Index Measure (SSIM) and Peak Signal to Noise Ratio (PSNR) of the image with latest denoising techniques. The proposed technique showed an efficient way to scale down poisson noise in scintigraphic images on a pixel-by-pixel basis.
ABSTRACT: A 2-D Adaptive Trimmed Mean Autoregressive (ATMAR) model has been proposed for denoising of medical images corrupted with Poisson noise.Unfiltered images are divided into smaller chunks and ATMAR model is applied on each chunk separately.In this paper, two 5x5 windows with 40% overlap are used to predict the center pixel value of the central row.The AR coefficients are updated by sliding both windows forward with 60% shift.The same process is repeated to scan the entire image for prediction of a new denoised image.The Adaptive Trimmed Mean Filter (ATMF) eradicates the lowest and highest variations in pixel values of the ATMAR model denoised image and also average out the remaining neighborhood pixel values.Finally, power-law transformation is applied to the resultant image of the ATMAR model for contrast stretching.Image quality is judged in terms of correlation, Mean Squared Error (MSE), Structural Similarity Index Measure (SSIM) and Peak Signal to Noise Ratio (PSNR) of the image with latest denoising techniques.The proposed technique showed an efficient way to scale down Poisson noise in scintigraphic images on a pixel-by-pixel basis.The highest correlation 0.9706, PSNR 10.023 and MSE 25.902 is achieved by the proposed technique.
INTRODUCTION
Scintigraphic or nuclear images allow investigation of the human body by injecting photons from radioactive products and detecting its radiation time.The radioactive product that is associated with a labelled biological molecule is called a tracer that gathers in the organ/tissue of interest after an injection into the blood.The planar gamma camera can be used for capturing radiation emitted by the radioactive product, this imaging technique is called scintigraphy [1].In nuclear medicine, scintigraphic images are used for investigation of some organs, regardless of their poor resolution.Poisson noise is one of the important sources of deterioration in scintigraphic images [2].Numerous techniques have already been used for reduction of Poisson noise in scintigraphic and Single Photon Emission Computed Tomography (SPECT) images.The degradation in quality of such tomographic images is caused by detection efficiency, attenuation, collimator correction, and scatter of gamma rays.
These factors cause the output image to have low contrast, high noise levels, and poor spatial resolution [3].In tomography, filtering techniques are considered very important for image enhancement.Reduction of noise can be achieved before reconstruction, which is called pre-filtering, or during or after reconstruction, which is known as post-filtering [4].
Adaptive autoregressive (AR) filters can be used for removal of Poisson noise in scintigraphic images [5].The AR filter is further improved to scale down noise from 3-D reconstructed data of scintigraphic images and also from SPECT images [6].It is important to apply the best AR filter for the projection data of scintigraphic images, because a small change in the projection data may cause a large change in the estimated transaxial image [6].
SCINTIGRAPHIC IMAGE ACQUISITION
The Gamma camera invented by Anger is used for capturing radiation emitted by the radioactive product [7].The main components of the gamma-camera for scintigraphic image acquisition are discussed in the following subsections.
The Collimator
The collimator puts gamma rays in one direction to reach the crystal; unlike light, gamma rays cannot be concentrated using lenses [8].Incoming gamma rays can be sensed by the collimator and the light generated by the cooperation of the gamma rays and crystal can be converted into an electronic signal by photomultiplier (PM) tubes and preamplifiers [9].Parallel hole, pinhole, converging, and diverging are all different types of collimator.The most popular collimator is the parallel-hole collimator, which retains the dimensions of an image.In the case of non-parallel collimators, the divergence or convergence nature of the collimator and geometrical disposition are the controlling factors of an image dimension and the cause of geometric distortion [7].Holes are detached by the lead "walls" are called septa.Resolution/sensitivity depends on the collimator thickness, hole diameter, and septal thickness.Generally, the collimator thickness is 0.3-1.4millimeters and the hole diameter is 1.8-3.4millimeters [10].
The Scintillator Crystal
In the gamma camera, crystals usually consist of thallium-activated sodium iodide (NaI (T1)).Edges and the front side can be protected from outside moisture and light by coating it with a thin aluminum (Al) layer.Atomic number (Z) and high mass density make the crystal desirable.Important properties of the crystal are high energy resolution, high detection efficiency, and low decay constant time [11,12].Crystals generally having a thickness of about 1 cm can detect photons with energies up to a few hundred keV.The energy accumulated in the crystal is proportional to the number of light photons produced by interaction of crystal and gamma rays.The light guide of the glass is optically coupled to the rear side of the crystal, which protects the crystal and directs the light photons to an array of photomultiplier tubes [10].
The Photomultipliers (PM) Tubes
The PM tube is responsible for converting photon energy diffused by the crystal to an electrical signal [11].This is accomplished by the consolidation of several elements placed in a vacuum to permit the flow of electrons.The first element is a photocathode placed in connection with the crystal.Light photons extract electrons on metal foil on the photocathode.These electrons are taken captive to the first dynode due to the application of a high voltage between positively charged and the photocathode.The electrons' expedition sanctions them to extract a much more immensely colossal number of electrons from the dynode.The same phenomenon is repeated on several other cascading dynodes [13].
PROPOSED MODEL
The proposed model consists of three major steps: an Autoregression (AR) model, Adaptive Trimmed Mean Filter (ATMF), and a power-law transformation.Fig. 1 shows a step-wise block diagram of the proposed technique.The steps of the proposed method are as follows: 1. Two 5x5 windows with a 40% overlap are used to predict the center pixel value of each corresponding window.2. AR coefficients for both windows are calculated using a Forward Backward Prediction (FB) method as shown in equations ( 2) and (3). 3. Using corresponding AR coefficients and neighborhood pixels of each window to predict the central pixel value according to equation (1).4. Using average of AR coefficients of both 5x5 windows to predict the pixel value of the overlapping region.5. Slide both windows forward with a 60% shift and update AR coefficients by following step 2. 6. Repeat steps 2 through 5 to scan the complete noisy image and update their predicted values denoted by X_pred.7. X_err is calculated by using equation (4) to preserve edges and getting predicted image I_pred.8. Now, a 7x7 window of AR denoised image (X_pred) is taken, slid pixel-by-pixel.9.The mean and variance for each 7x7 individual window is calculated.The overall variance from local variance is computed.10.The adaptive mean of an image is computed using equation ( 5) to make boundaries prominent on its true position.11.The pixel values of the 7x7 window are arranged in ascending/descending order, the ten upper and ten lower outlier pixels are trimmed and the mean of the remaining pixels is computed using equation (6).Ten pixels are trimmed on both sides as it gives good results after performing different experiments with trimming a different number of pixels.12.The power law transformation is applied using equation (7) for contrast stretching to improve the visual quality of the image.
Fig. 1: Block diagram of the proposed denoising model.
AR Model
In the first step, the AR model filter is applied in which each pixel of the image is regressed on its neighborhood pixel values called the prediction region in AR model.The variable of interest in the AR model is predicated using linear combination of the surrounding values of the variables.The AR models are linear prediction models that split an image into two additive components, a predictable image and a prediction error image.In the AR model, no past values of the model input are used [14].In this research, a new 2dimensional adaptive autoregressive model for filtering of scintigraphic images is introduced.An AR process ( 1 , 2 ) can be expressed as [15].
where ( 1 , 2 ) are the weighting coefficients, indices 1 and 2 define the type of prediction region in a two dimensional array ( 1 , 2 ) matrix, and ( 1 , 2 ) represents prediction error, that is, the difference between the original value and the predicted value in this pixel.Predicted image is the image obtained by applying the AR model on the original image .AR coefficients for both windows are calculated using a Forward Backward Prediction (FB) method.
In scintigraphic images, the same model cannot be applied to the entire image as it consists of large local spatial variations, therefore, the model must be adapted according to the variations.That is why the image is divided into smaller chunks and the AR model is separately applied to each chunk.In this method, two 5 x 5 windows with a 40% overlap are used to predict the pixel value of the central row.If more or less than 40% overlap ratio is selected, the predicted values will come closer to the previous or next pixel values.In order to keep a balanced correlation with the previous and next pixel values, this overlapping ratio is selected experimentally.The AR coefficients on both windows are computed using a Forward Backward Prediction (FB) method as follows.
The forward predictor model predicts a sample x(m) from a linear combination of P past samples x(m−1), x(m−2), . ..,x(m−P).
where the integer variable m is the discrete time index, ⏞ () is the prediction of x(m), and are the predictor coefficients.
Similarly, we can define a backward predictor, that predicts a sample x(m−P) from P future samples x(m−P+1), . .., x(m) as where the integer variable m is the discrete time index, ⏞ ( − ) is the prediction of ( − ), and are the predictor coefficients.
Using corresponding AR coefficients and four closest neighborhood pixels of the window to predict the central pixel value according to equation (1), both windows are slid forward with 60% shift and AR coefficients are updated.The same process is repeated to scan the whole image for prediction of the new denoised image.The AR model changes the nature of Poisson distribution somehow, which looks like a Gaussian distribution.Adaptive Trimmed Mean Filter (ATMF) is applied to the resultant image, which gives better results in terms of reduction in Poisson noise.
is calculated by following equation to preserve edges.
where is the noisy image, is the predicted image and is the prediction error image.The error image is averaged out by using the averaging filter to sum up with the predicted image for edge enhancement.
An example of a two blocks with 40% overlapping is represented in Fig 2.
Adaptive Trimmed Mean Filter (ATMF)
ATMF is applied to the output denoised image of the AR process.The aim of applying ATMF is to remove lowest and highest variations in the pixel values and average out the remaining neighborhood pixel values.The whole image is divided into 7 x 7 smaller blocks and the local mean and variance of each block are computed.ATMF for an × image is given by the expression: [16] where (, ) and (, ) represent the output and input images, respectively.The is the local trimmed mean, 2 is the overall noise variance, and 2 is the local noise variance.If 2 is close to zero, it produces an output very close to the input image (, ).Likewise, if 2 ≫ 2 it also produces an output pixel close to (, ).Otherwise, this filter outputs a pixel close to the local average. for an × image is given by expression in: [16] where ∝ is the total number of maximum brightest and darkest trimmed pixels and ′(, ) is the sum of remaining pixels.ATMF removes abnormal pixel variations, preserves boundaries at their true position, and also removes blurriness effects [17].
Power-Law Transformation
The power-law transformation is applied to the resultant image of the ATMF for contrast stretching and to improve visual quality.The power-law transformation can be expressed as follows [17]: where c and γ are positive constants having values 1 and 3 respectively and r is pixel intensity value of the image.
SIMULATION AND DISCUSSION
The proposed model is applied on artificial scintigraphic images having different image statistics corrupted with Poisson noise.The AR model provides a good result with 40% overlap but at the cost of some blurring in the image, which is further improved by applying the ATMF.Power-law transformation is applied for contrast stretching.This technique maintains the high resolution and also preserves the edges along with the noise reduction.The efficiency of the proposed technique is compared with the AR combination of median and Wiener filter and advanced filter, i.e.Non Local Mean (NLM) filter [18] in terms of visual quality, correlation, MSE, PSNR and SSIM.The proposed technique produced better results in terms of edge preservation though the AR process & adaptive trimmed mean filter produced a smoothing effect, while the edge loss is observed in the case of the conventional filters such as median filters.The quantitative performance measures such as correlation, MSE, PSNR and SSIM are used to check the performance of Poisson noise reduction filtering techniques.Experimental results show that the proposed technique performs significantly well than many other conventional & recent filtering techniques like median, Wiener and NLM filters, respectively.The aim of scintigraphic image filtering is to restrain statistical noise while sustaining contrast and spatial resolution [19].The proposed technique simultaneously provides both efficient noise reduction and good spatial resolution for scintigraphic images.
A renal scintigraphic image, artificial scintigraphic image, and transaxial slice of the Zubal phantom [20], denoised by the proposed model and other comparative methods, are shown in Fig. 3, Fig. 4 and Fig. 5, respectively.The proposed method shows good visual results.Figure 6 shows the correlation comparison of the proposed method with median + AR, Wiener + AR and NLM filter.The graph clearly shows that the proposed method produced much better results than median + AR, Wiener + AR and NLM filter at high noise.NLM shows a good result at low noise but as noise varies from low to high, its results degraded.The graph is computed for different noise variations.Figures 8 and 9 show PSNR and SSIM comparison of the proposed method with other methods on different noise variation.The graph clearly shows that the proposed model is more efficient than other compared methods.The aim of the proposed method is to deal with the noise at high level.At low level noise, the conventional filters, e.g., Median and Weiner filter perform well but they are failed to eradicate high variation of noise.One can easily eliminate low level noise by simply using these conventional filters.The proposed method is specifically designed to deal with the high variation of noise.That is the reason that the proposed method beats all other methods at a high level of noise.
CONCLUSION
The proposed model shows an efficient way to scale down the Poisson noise in scintigraphic images on a pixel-by-pixel basis.Edge preservation through calculation of error image after the predicted image through AR model and smoothening through ATMF to remove lowest and highest variations in pixel values, are the major contributions of this research work.
Fig. 3 :
Fig. 3: Renal scintigraphic image (a) Noise-free image (b) Noisy image (c) Denoised image by median filter with combination of AR model (d) Wiener filter with combination of AR model (e) NLM Filter (f) Proposed technique.
Fig. 4 :
Fig. 4: Artificial scintigraphic image (a) Noise-free image (b) Noisy image (c) Denoised image by median filter with combination of AR model (d) Wiener filter with combination of AR model (e) NLM Filter (f) Proposed technique.
Fig. 5 :
Fig. 5: Transaxial slice of the Zubal phantom (a) Noise-free image (b) Noisy image (c) Denoised image by median filter with a combination of AR model (d) Wiener filter with a combination of AR model (e) NLM Filter (f) Proposed technique.
Figure 7
Figure7is plotted for MSE comparison of the original and the denoised image[17].The MSE result of the proposed method is significantly better than median, Wiener combined with AR, and NLM filters.MSE is represented mathematically as
Table 1
validated that the proposed method achieved the highest performance in term of MSE, PSNR, and Correlation.
Table 1 .
MSE, PSNR and Correlation comparison with different methods | 4,008.2 | 2018-12-01T00:00:00.000 | [
"Medicine",
"Engineering"
] |
Cutoff AdS$_3$ versus the $T\bar{T}$ deformation
A recent proposal relates two dimensional holographic conformal field theories deformed by the integrable $T\bar{T}$ flow to AdS$_3$ with a finite radial cutoff. We investigate this proposal by studying perturbative correlation functions on the two sides. For low point correlators of the stress tensor, we successfully match the deformed CFT results at large central charge to bulk results obtained in classical pure gravity. The deformed CFT also provides definite predictions for loop corrections in the bulk. We then include matter fields in the bulk. To reproduce the classical bulk two-point function of a scalar operator we show that the deformed CFT needs to be augmented with double trace scalar operators, with the $T\bar{T}$ operator yielding corrections corresponding to loops in the bulk.
Introduction
Recently, McGough, Mezei, and Verlinde [1] proposed an intriguing extension of the AdS 3 /CFT 2 correspondence. On the bulk side, the boundary lies not at asymptotic infinity, but instead at a finite radial position. The dual quantum field theory is no longer conformal, but is rather described by a CFT deformed by the remarkable T T operator of Zamolodchikov [2]. The bulk side of this proposed duality is interesting in that the ability to move the boundary inward could shed on the important question of the emergence of bulk locality; the notion of introducing a cutoff boundary surface in this context has arisen in earlier work, e.g., [3,4], and also in relation to the fluid-gravity correspondence, e.g. [5]. In particular, [6] and (more explicitly) [7] both show that such cutoffs are dual to some deformation of the orginal CFT. Recent work on this and related duality proposals include [8,9,10,11,12].
The particular deformation proposed by [1] is especially interesting: the T T operator is irrelevant in the renormalization group sense, yet the deformed theory appears to be far more predictive than the generic non-renormalizable QFT. For these reasons, it is worthwhile to see to what extent the setup of [1] can be elevated to a full-fledged holographic correspondence, complete with a well defined dictionary for relating observables on the two sides, and this is the focus of the present work.
The proposal of [1] was not so much derived as motivated based on observing a nontrivial correspondence between several quantities computed on the two sides, in particular involving the deformed energy spectrum of certain states and the propagation speeds of small perturbations around thermal states. Let us first review some key aspects of the T T deformation in QFT. Given any 2d QFT with a local stress tensor, the composite operator T T can be defined in a canonical way up to derivatives of local operators [2]. The deformed action, S(λ), is stipulated to obey dS dλ = d 2 x √ gT T . Assuming that the undeformed theory is a CFT, as reviewed below we can equivalently say that the trace of the deformed stress tensor obeys (up to derivatives of local operators) A remarkable consequence [13,14] is that the exact λ dependence of the energy spectrum can be written down explicitly. Taking the theory to live on a spatial circle of circumference L and using dimensional analysis to write energy eigenvalues as E n = 1 L E n (λ/L 2 ) one can establish the differential equation 1 where p is the momentum of the state. The solution yields where E n,0 is the energy of the state in the undeformed theory. One of the main observations of [1] was that the formula (1.3) arises in pure AdS 3 gravity by considering the quasilocal energy [15,16] defined on a surface at finite radial location r. The expression appearing under the square root above indeed exhibits a marked similarity to the function appearing in the standard form of the BTZ solution. In our conventions, the quasilocal energy is given as E = 1 2π dφ √ g φφ u i u j T ij , where u i is the timelike unit normal to the integration surface, and T ij is the usual boundary stress tensor [15,17], (1.4) Here g ij is the boundary metric, K ij is the extrinsic curvature, and ℓ is the AdS scale. Evaluated in BTZ on a surface of fixed r, the quasilocal energy turns out to match (1.3) under the identification λ = 4Gℓ π ; L = 2πr. Note that our conventions differ from those of [1] which instead give λ = 4Gℓ πr 2 and L = 2π, though both result in the same dimensionless ratio λ L 2 . This dimensionless ratio is the only physical measure of the 'distance' that the boundary has been moved into the bulk. We henceforth set ℓ = 1.
As for propagation speeds, if one considers a QFT state in the deformed theory with constant T ++ and T −− , then small perturbations of the stress tensor can be shown to propagate at speeds (1.5) The same propagation speeds arise in pure AdS 3 gravity by considering perturbations that preserve Dirichlet boundary conditions on the cutoff surface [5,18].
The following observations are useful to understand the origin and generality of these correspondences. First of all, if we use coordinates such that ds 2 = dρ 2 + g ij (ρ, x)dx i dx j with a cutoff surface at fixed ρ, then the ρρ component of the Einstein equations is We then note that this equation applied to the stress tensor (1.4) is easily seen to imply the key trace relation (1.1) under the identification λ = 4G/π, assuming a flat boundary metric. This observation suffices to explain the agreement of the propagation speeds, since these can be obtained by studying linearized perturbations of the conservation equation To further explore the proposed correspondence, we consider the computation of stress tensor correlation functions on the two sides, focussing on two and three point functions.
Elementary considerations on the QFT side yield results for the two-point functions to order λ 2 , and we find, for example, T zz (x)T zz (0) = c/2 z 4 + 5π 2 λ 2 c 2 6 1 z 6 z 2 + O(λ 3 ). On the bulk side, we adopt the standard AdS methodology of relating boundary stress tensor correlators to the variation of the on-shell action with respect to the boundary metric. Perhaps surprisingly, this leads to the result that two-point functions are exactly the same whether the boundary is at a finite radial location or off at infinity as usual 1 ; in particular we have T zz (x)T zz (0) = c/2 z 4 where c is the usual Brown-Henneaux central charge. Does this conflict with the presence of an order λ 2 correction on the QFT side? Under the correspondence, λ = 4G/π ∼ 1/c, and so we see that the λ 2 c 2 contribution is down by a factor of 1/c compared to the leading term. On the bulk side, this corresponds to a suppression by a factor of G, which implies that it is a one-loop effect and therefore not seen by our classical analysis. So our results are not in conflict with the proposed correspondence provided one compares results order by order in 1/c, recalling that λ ∼ 1/c.
To test this further, and in particular to check agreement between quantities that do receive corrections in λ, we next turn to three-point functions. Consider the representative examples T zz T zz T zz and T zz T zz T zz . These both vanish in the undeformed CFT, but get contributions of order λc 2 in the deformed theory, and the explicit results are easily computed in conformal perturbation theory. Since λc 2 ∼ c ∼ 1/G we expect these results to agree with a classical bulk computation, and we indeed establish precise agreement. We similarly establish agreement for all stress-tensor three-point functions at this order.
As in the case of the two-point functions, the three-point functions in the QFT also receive higher order corrections in 1/c, and the prediction is that these should match the corresponding loop diagrams in the bulk. It would of course be interesting to verify the one-loop (and higher) agreement, but we leave this to future work.
On the QFT side, the trace relation (1.1) is an exact operator statement for any deformed CFT, and the energy spectrum (1.3) is similarly an exact relation governing the change in energy of all states in the original CFT. On the other hand, the statements made on the bulk side so far apply only to pure gravity solutions. But what happens when there are nontrivial matter fields in the bulk? One can first ask whether the duality can persist as before, with the boundary QFT still being just a T T deformed CFT. It is easy to see that this does not work. In terms of our previous discussion, the new issue is that the Einstein equations now pick up an extra matter stress tensor term. So instead of getting (1.1) in the bulk we get T i i = −4πT T − t ρ ρ , where t ij is the matter stress tensor. There is no reason for t ρ ρ to vanish, so there is a conflict. Similarly, the energy flow equation (1.2) now gets a contribution from t t t . The quasilocal energy as a function of radial location can still be worked out explicitly in the case of a static solution, but the result is a much more 1 The commutator part of this result follows from the symplectic structure computations of [19]. complicated dependence on the radial coordinate, and the simple relation between λ and r is lost.
To gain more insight we consider scalar two-point functions. In the bulk we compute the two-point function for a free scalar field with Dirichlet boundary conditions on the cutoff surface. At long distance this goes over to the usual AdS correlator, but there is an infinite series of corrections to this result. To reproduce these we need to add to the QFT action a series of double trace operators built out of the operator O dual to the bulk scalar. The presence of such additional terms in the action is consistent with our statement above regarding the change in the trace relation and energy spectrum. We also study the effect of the leading order T T perturbation on the two-point function in the QFT, which turns out to yield both power law and logarithmic corrections to the correlator. These corrections, perhaps supplemented by other interactions that involving both the stress tensor and scalar operator, should correspond to one-loop graviton corrections in the bulk, by the same logic as in the stress tensor correlators.
To summarize our findings, it appears to us that the duality proposed in [1] can successfully relate stress tensor correlators in the T T deformed CFT to the corresponding correlators computed in pure gravity with boundary conditions at a finite location in the bulk, although this statement remains to be checked at loop level. On the other hand, the situation is more complicated once bulk matter is introduced. Interactions above and beyond those of T T need to be introduced, and essentially fixed by hand to reproduce bulk results.
T T review
We begin by reviewing salient features of T T deformed conformal field theories [2,13,14].
We define the stress tensor via the metric variation of the Euclidean action, Given a general 2D QFT, we can define the bilocal operator On the flat metric ds 2 = dzdz, to which we now restrict unless stated otherwise, this reduces to As shown in [2], this operator exhibits a remarkable OPE structure as x → y, The (possibly divergent) functions A α (x − y) multiply y-derivatives of local operators. We can use this relation to identify the local operator T T (y) as O(y) modulo derivatives of other local operators. Another way to say this is that provides an unambiguous and UV finite definition of the integrated operator d 2 x √ gT T (x), and we adopt this definition henceforth. Starting from a generic QFT with Euclidean action S 0 , the T T deformed action is defined via the equation subject to the boundary condition S(0) = S 0 . Importantly, the T T operator appearing on the right hand side is defined in terms of the stress tensor corresponding to the action S(λ). Hence the equation (2.6) implies a nonlinear λ-dependence for S(λ). We can imagine solving (2.6) by starting with a given S(λ), computing the stress tensor of that theory, and then using (2.6) to obtain S(λ + δλ).
In general, for a theory with a single mass scale µ dimensional analysis yields A CFT deformed by T T has the single scale 3 λ = 1/µ 2 , and so the relation (2.6) yields Strictly speaking this result holds only under the integral since the right hand side is only defined up to total derivatives. However, (2.8) is correct as written to first order in λ since the operator product defining T T is nonsingular when the stress tensor is that of a CFT.
Deformed free scalar action, and Nambu-Goto
It is instructive to carry out this procedure at the classical level starting from the action for free scalar fields (this was done in [14]) First consider a single scalar field, and write the ansatz The defining equation (2.6) becomes a differential equation for F (z) which is readily solved as The case of multiple scalar fields requires a more general ansatz, This leads to a partial differential equation for F which turns out to have a solution corresponding to the action The relation (2.8) is readily verified. Up to an additive constant, the action (2.13) is recognized as the Nambu-Goto action written in static gauge, as is made manifest by writing so that The Nambu-Goto action exhibits manifest SO(N + 2) global symmetry, along with reparametrization invariance. The SO(N + 2) symmmetry is nonlinearly realized in the gauge fixed form (2.13) due to the need to perform a compensating reparametrization to maintain static gauge. There is no obvious a priori connection between the T T deformation and the existence of this global symmetry. Of course, our discussion of the free boson theory has been purely classical, and at the quantum level one encounters the usual issues regarding the quantization of the Nambu-Goto action outside the critical dimension. This discussion is most naturally phrased in the language of effective strings (see [20] for a very clear review of the relevant issues), in which a series of higer derivative terms are added to S(λ). Based on physical considerations, namely that effective strings appear as solutions of Lorentz invariant theories -e.g. as QCD strings or Nielsen-Oleson vortices -one expects that there exists a quantization of (2.15) that preserves the SO(N + 2) symmetry.
Returning to the single scalar theory (2.11), now in Lorentzian signature, the Hamiltonian is This illustrates that the choice of sign for λ is quite significant; taking λ > 0 implies a rather unusual constraint on the phase space in order to preserve reality conditions. From this perspective, λ < 0 appears rather more conventional than λ > 0.
In terms of the energy and momentum of the undeformed theory, a configuration of constant π φ and φ ′ on a circle of length L has energy This illustrates that for λ > 0 configurations of sufficiently large E 0 for a given p render the energy complex in the deformed theory. The expression (2.18) is a classical version of the general quantum result, which we now review.
Energy spectrum
We start with a CFT on a spatial circle of size L, and assume the theory has a discrete spectrum. Assuming λ is the only scale present in the T T deformed theory, the energy of the nth state can be written The momentum p is integer quantized in units of 2π/L and so does not change with λ. As shown in [13,14] the following differential equation holds The solution yields where E n,0 = E(0)/L is the energy of the state in the undeformed theory. This agrees with the previous classical result (2.18). We emphasize that the assumptions going into the result (2.21) are quite minimal, essentially just that the T T deformed CFT exists as a theory with a single scale λ.
AdS 3 gravity with a radial cutoff
We now turn to the gravity side of the correspondence. Most of the following section is a rederivation of results in [1] from a slightly different point of view.
Basic formulas
The action for pure gravity in AdS 3 is We work in Euclidean signature, and henceforth set the AdS radius to 1: ℓ = 1. Our curvature conventions are that R(AdS 3 ) = −6. h ij is the metric on the boundary. In a coordinate system such that the metric takes the form the extrinsic curvature is It is also useful to note that after integration by parts the action takes the form Einstein's equations R µν − 1 2 Rg µν − g µν = 0 in the coordinate system (3.2) take the form The on-shell variation of the action, δS = 1 4π d 2 x √ hT ij δh ij yields the stress tensor which obeys ∇ i T ij = 0 by virtue of the field equation E ρ i = 0.
Trace relation
Recall that on the CFT side the trace relation T i i = −4πλT T is equivalent to (2.6) which fixes the form of the deformed action. So we would like to see this relation appearing on the gravity side as well. Using the definition of the boundary stress tensor (3.6), together with the constraint equation E ρ ρ = 0 we compute This implies that on a flat boundary metric we have We emphase that (3.8) holds for any solution of the Einstein equations with a flat boundary metric. In [1] the relation between the deformation parameter λ and bulk quantities involves the radial location r of the cutoff surface, whereas the relation (3.9) involves no such thing. T i i and T T are both coordinate independent objects, so the relation between them cannot involve an arbitrary radial coordinate. However, a radially dependent expression for λ will emerge naturally below when we consider the spatial circle to have a specified size L.
Propagation speed
The fact that under the dictionary (3.9) we get the same trace relation in CFT and gravity immediately implies that we will get agreement for the propagation speed of stress energy perturbations. This follows because the propagation speed is derived using just the conservation equations and the trace relation. Namely, on a flat metric ds 2 = dzdz these equations are (3.10) Upon linearizing these equations (after converting to Lorentzian signature) around a background of constant T ij it is straightfoward to show that perturbations propagate at in agreement with results stated in [1]. The superluminal nature of these speeds for λ > 0 has been discussed in [5,18]. In the bulk this can be understood simply as coming from the coordinate transformation needed to put the metric on the constant r surface in standard form.
Energy spectrum
We now consider the Euclidean BTZ metric It is convenient here to focus on the dimensionless "proper energy" where u i is the unit normal to a constant t slice of the boundary, (3.15) L = dφ √ g φφ is the proper size of the spatial circle on the boundary. We have L = 2πr and λ = 4G π . We now compute Another way to arrive at this conclusion is to observe that the flow equation (2.20) follows from Einstein's equations. For simplicity, consider the case p = 0 corresponding to a static metric. Writing a general static metric in the form ds 2 = dr 2 /f (r) 2 +g(r) 2 dt 2 +r 2 dφ 2 , and without using the Einsten equations we compute E = L 2 2π 2 λ 1 −r −1 f (r) . The Einstein equation E t t = 0 is then easily seen to be nothing but (2.20).
Correlation functions in the deformed CFT
A CFT deformed by T T has an action that obeys dS dλ = d 2 x √ gT T , which implies where S 0 and [T T ] 0 are the action and perturbing operator of the undeformed CFT. We now ask whether it is sensible to compute correlation functions in the deformed theory. The obvious issue is that since we are perturbing the CFT by an irrelevant (in the RG sense) operator, we potentially have to deal with all the issues associated with non-renormalizable theories. In particular, we could take the effective field theory point of view, imposing a UV cutoff and computing correlators in the presence of an infinite number of counterterms, each with an arbitrary coefficient. However, if we restrict attention to correlation functions of the stress tensor the situation is much more favorable and it is possible to draw some universal conclusions. We confine our analysis to perturbation theory in λ; the definition of correlators at the non-perturbative level is of course a much more difficult question. The first point to recall is that the operator d 2 x √ gT T defined as in (2.5) is finite and unambiguous, and so no dependence on a renormalization scale enters. Next, let's recall the standard statement that conserved currents, such as the stress tensor, are not renormalized in perturbation theory. The usual argument for this (e.g. chapter 4 of [21]) involves deriving a Ward identity by making an infinitesimal symmetry transformation in the renormalized path integral. The Ward identity takes the schematic form 1) where δφ is the transformation generated by the current. Since the correlators on the right hand side are those of renormalized fields they are by definition finite, and hence so too is the left hand side. Thus, the current J µ obtained by applying Noether's theorem to the renormalized action has the property that ∂ µ J µ has finite correlators with renormalized fields. Thus, up to the possible addition of an identically conserved vector operator, the same is true of J µ . We should note that such identically conserved vector operators can indeed make an appearance; for example they do so for the U(1) current in QED, where the operator is V µ = ∂ ν F µν [22].
Here we are concerned with the stress tensor, corresponding to J µ = T µν ξ ν , where ξ ν is a Killing vector. We will assume that there exists no identically conserved tensor that can mix under renormalization with the stress tensor; this should hold generically, since any such operator would need to have scaling dimension equal to precisely 2. Under this assumption, the stress tensor defined in the usual way from the renormalized action will have finite correlators with renormalized fields. In particular, combining this with the statement about the finiteness of the deforming operator, we conclude that stress tensor correlators are finite and independent of renormalization scale (they do of course depend on the dimensionful scale λ).
We should note that this argument assumes the existence of a stress tensor obeying the Ward identity. However, a complete argument should give a prescription for defining this object. We would like to define the stress tensor as the variation of the action with respect to the background metric, but the issue here is that the T T perturbation is only unambiguously defined on a flat background. Therefore, we leave to the future a definitive answer to the question of whether all stress tensor correlators can be computed perturbatively in λ without ambiguity. Here we will only consider low point correlators at the first nontrivial order in perturbation theory, where this subtlety does not appear to arise.
Two-point functions
We first place general constraints on the form of the two-point functions; these hold equally well in CFT and in the bulk. On the metric ds 2 = dzdz and in the presence of the T T deformation, dimensional analysis, and translation/rotational symmetry imply and then also by symmetry we have
(4.3)
Here the dimensionless variable y is Demanding stress tensor conservation implies (4.5) Since we have a CFT perturbed by an irrelevant deformation, correlators should go over to their CFT values at long distance, which implies that we are looking for solutions with boundary conditions
Two-point functions in deformed CFT
It is now simple to work out the deformed CFT two-point function to order λ 2 . We consider operators at distinct points corresponding to ignoring possible contact terms. Recall that we have the operator equation T zz = −πλT T , and so where the stress tensors appearing on the right hand side are those of the undeformed CFT. So we can compute T zz (x)T zz (0) to order λ 2 by using the above relation and evaluating correlators in the undeformed theory. This gives f 4 (y) = π 2 c 2 4y 2 + . . . Higher order corrections can be worked out in conformal perturbation theory.
Note that there are no corrections at order λ. The correlator T zz (x)T zz (0) would seem to get an order λ contribution by bringing down one λT T interaction vertex, but the corresponding integral turns out to vanish.
Three-point functions in deformed CFT
We start by considering a couple of examples that can be easily computed. The simplest nontrivial three-point function result is the order λ contribution to the correlator T zz (x 1 )T zz (x 2 )T zz (x 3 ) . We simply use the relation (4.7) to obtain (4.10) A slightly less trivial example is provided by the order λ contribution to the correlator T zz (x 1 )T zz (x 2 )T zz (x 3 ) . Using √ gd 2 x = 1 2 d 2 z, ∂ z 1 z = 2πδ 2 (z), 4 and repeatedly integrating by parts, we have A little thought reveals that all three-point functions at order λ are fixed by simple considerations. Consider a correlator involving T zz . We can evaluate this to order λ by using (4.7) and the undeformed correlators. Noting the symmetry under z ↔ z, this just leaves the T zz T zz T zz , which we worked out in (4.11), and T zz T zz T zz . But the latter correlator clearly has no order λ contribution, since the T zz operator in the interaction term has nothing to contract against.
Correlators in cutoff AdS
We now turn to the computation of stress tensor correlation functions in the bulk. We assume the same basic framework as in standard AdS correlator computations. Namely, we compute the on-shell bulk action as a functional of the metric on the boundary, and then obtain correlators by taking functional derivatives, .
This prescription has the important virtue that diffeomorphism invariance of the action implies that these correlators obey the correct conservation laws / Ward identities. We consider metrics of the form ij (y, z, z)dx i dx j + . . . , (5.2) and perturbatively solve the Einstein equations subject to the boundary condition We are placing the boundary at y = 1; there is no loss of generality in this choice in the sense that any fixed y surface can be brought to y = 1 by a coordinate transformation that preserves the background metric.
Two-point function
We read off the two-point function via where as usual The Einstein equations are E µν = R µν − 1 2 Rg µν = g µν = 0. E yµ are constraint equations, and once imposed at y = 1 they are automatically obeyed for all y by virtue of the "dynamical" equations E ij = 0.
To compute the two-point functions we need only consider the Einstein equations to first order in ǫ. The dynamical equations E ij = 0 are To compute T zz T zz we set h zz = h zz = 0. The constraint equation E yy gives The remaining constraint equations are (5.10) zz ǫ we read off the correlator from (5.4) as where c = 3/2G is the Brown-Henneaux central charge. On the other hand, since f (1) zz and f (1) zz are both local functions of h zz the corresponding correlators T zz (z)T zz (z ′ ) and T zz (z)T zz (z ′ ) vanish up to contact terms.
Recalling the analysis in section 4 we see that this computation is sufficient to fix all the two-point functions, and in particular we find that besides the result (5.11) and the corresponding result for T zz T zz all other two-point functions vanish up to contact terms, a result which is easily verified by repeating the previous computation for the other cases. So at this order the two-point functions are precisely those of a CFT and show no sign of the λ deformation. We note that the commutator part of this result follow from the symplectic structure computed in [19].
As we discussed in the introduction, this makes perfect sense when we recall that our classical analysis just gives the contribution to correlators proportional to c. Since λ ∼ 1/c, the correction terms appearing in (4.9) are of order c 0 , and hence correspond to one-loop effects in the bulk. In order for a classical computation to exhibit λ dependence, we need to turn to the three-point functions.
Three-point functions
We follow the same strategy to compute three-point functions. The intermediate algebra is a bit messy and unilluminating so we do not show all details.
We first consider T zz (x 1 )T zz (x 2 )T zz (x 3 ) . We proceed by activating h zz and h zz and extracting the contribution to T zz proportional to the cross term h zz h zz . Using T zz = − 1 8G ∂ y g zz ǫ 2 we find where the f (1) are given by the same linearized computation as appeared in the two-point function computation, Recalling that in our two-point function computation we had T zz = 1 4G f (1) zz ǫ (along with the corresponding result for T zz ) we see that (5.12) implies 14) in agreement with the O(λ) CFT result written in (4.10). This agreement is not a surprise, as it follows from the fact that our computations respects the trace condition T i i = −4πλT T , but it is a good computational check.
Next we consider T zz (x 1 )T zz (x 2 )T zz (x 3 ) . We turn on h zz and study the response of T zz at second order. We find which we solve as This yields which agrees with (4.11) upon using λ = 4G/π and c = 3/2G. We can now argue that all three-point functions will match at this order. Given any correlator involving an insertion of T zz , we compute it by evaluating T zz in the presence of sources for the other operators; by sources we mean variations of the boundary metric. Since there is no source for the T zz whose value we are computing, we can use the Einstein equations for a flat metric to replace T zz → −πλT T , and then use known results for the λ 0 correlators. Such correlators therefore match those in the deformed CFT, since the same logic applies there. This just leaves the correlator in (5.17), which we found to match, along with T zz T zz T zz . But there is no order λ correction to this correlator at the classical level, and so we just have the undeformed CFT result, which matches the CFT at this order.
Including matter in the bulk
What makes the T T deformation on the CFT side especially interesting is its universality: it can be applied to any 2D QFT. However, on the bulk side our discussion has so far been limited to solutions of pure gravity with a negative cosmological constant. The obvious question is whether the appealing dictionary relating the two sides can be extended to the case where nontrivial matter fields appear on the bulk side. We first give general arguments that the correspondence must be modified when we consider bulk solutions in the presence of classical matter fields that deform the geometry.
To start with, the Einstein equations written in (3.5) in the presence of matter are where t µν is the matter stress tensor. We are supposed to solve (6.1), along with the matter field equations, subject to Dirichlet boundary conditions on a cutoff surface. The natural Dirichlet problem is to hold the metric fixed at the boundary, here taken to be a flat metric, and to demand that matter fields are constant on the boundary. More precisely, we demand that the matter fields on the boundary are invariant under coordinate transformations of the boundary. These conditions ensure that the boundary stress tensor, defined exactly as before according to (3.6), is covariantly conserved. This is because covariant conservation follows from diffeomorphism invariance: provided the matter data on the boundary obeys δ ξ Φ = 0, and then conservation follows upon integration by parts.
As we have noted, the basic equation defining the T T deformation in CFT is the trace relation T i i = −4πλT T , and we saw that in the bulk this followed from the Einstein equation E ρ ρ , valid in the absence of matter. In the presence of matter we have E ρ ρ = −4Gt ρ ρ which leads to In general, under our boundary conditions there is no reason why t ρ ρ should vanish; for instance a scalar field that varies only in the radial direction will generically yield t ρ ρ = 0. Therefore, we find that imposing Dirichlet boundary conditions yields a stress tensor that does not respect the defining equation governing the T T deformation.
To elaborate on this, we can generalize the previous energy computation to include matter fields. This can be done quite explicitly if we assume a static rotationally symmetric configuration for the metric and matter fields. The bulk metric can be found explicitly in terms of the matter stress tensor; this is essentially the content of Birkhoff's theorem in this context. Defining the radial coordinate via g φφ = r 2 , we find It is evident that there is no simple correspondence with the CFT result (2.21); in particular λ cannot be related in any simple way to the radial coordinate r. We also note that the computation of the propagation speed of perturbations will not match, since this agreement was based on the trace relation agreeing between the two sides. Rather than pursue this avenue further, we instead turn to a discussion of free scalar fields propagating on a pure gravity background.
Scalar correlators
Our goal here to see what must be done on the QFT side of the duality in order to reproduce the simplest matter correlation function in the bulk, namely the two-point function of a free scalar.
Two-point function of bulk free scalar
The scalar two-point function is computed as usual except that we impose Dirichlet boundary conditions on a surface at finite y = y 0 , where the metric is ds 2 = (dy 2 + dx i dx i )/y 2 . The on-shell action is The wave equation has plane wave solutions We assume a generic mass so that ν is typically not an integer. The action is then corresponding to the two-point function We recall that the Bessel function has an expansion for small argument of the form The correlator then has the structure where the functions g k (p 2 ) (which are easily worked out) are analytic at the origin. We suppressed the dependence on y 0 , since we can in any case set y 0 = 1 by a coordinate transformation that preserves the metric. The leading piece as p → 0 is (p 2 ) ν , which is the result in the undeformed CFT. We can also examine the short distance behavior. Recall that K ν (x) ∼ π 2x e −x (1 + a x + . . .) as x → ∞. This yields φ(p)φ(−p) ∼ 1/p, corresponding to a 1/x short distance behavior. Quantum corrections to this result are expected to be important.
Scalar two-point function in CFT with double trace perturbations
In AdS/CFT the dual to a free scalar of mass m in the bulk is a "generalized free field": a scalar operator O of dimension 2h set by m 2 = 4h(h − 1) whose correlation functions factorize into products of two-point functions. In a CFT, the two point function We would now like to reproduce the bulk two-point function computed on a cutoff surface in the bulk. As noted by [7], this can be accomplished by adding double trace interactions to the action. We essentially reproduce their work below.
It is convenient to work in momentum space and normalize our scalar operator in the original CFT such that We then include a general double trace term in the action The two-point function in the deformed theory is easily computed by thinking of (2.4) as an interaction and using the assumed factorization of correlators. Summing the geometric series gives Comparing with (7.6) we can choose f (p 2 ) to get agreement, but the needed function is necessarily non-analytic at the origin, where the f k (p 2 ) are analytic. Note that they are also of order 1; i.e., they are not suppressed by powers of c. This gives another argument that such effects cannot be reproduced by the TT deformation alone. The above non-analyticity implies non-locality in position space. This may be disappointing in comparison with the TT deformation, but is expected from e.g. the known failures of boundary-correlator micro-locality and strong subadditivty of entropy [23] that arise when the bulk is subjected to a strict radial cutoff. This particular non-locality was found previously in both [6] and [7].
Effect of T T deformation
We now consider the first order correction to the scalar two-point function due to the T T deformation. Here it is easier to proceed in position space. Using the CFT result, as fixed by the OPE, the first order correction to the two-point function is (7.12) Introducing a Feynman parameter the integral is The integral is UV divergent and so we apply dimensional regularization: d 2 x → d d x, yielding 4π (x 1 − x 2 ) 6 (2 ln[π(x 1 − x 2 ) 2 ] − 5 + 2γ) + O(ǫ) .
In bulk language, this result corresponds to a one-loop correction due to a graviton loop, since λ ∼ G. We also note the appearance ln(x 1 − x 2 ), whereas we saw above that the tree level correlator contains only power law terms for generic mass.
Conclusion
In this work we have further explored the cutoff AdS / T T deformed CFT duality. Some of the basic features of the deformed CFT, namely the trace relation T i i = −4πλT T and the flow equation for energy eigenvalues, can be readily identified as components of the Einstein equations for pure AdS 3 gravity. A powerful fact is that these results on the QFT side are universal, applying to any deformed CFT and any energy eigenvalue of such a theory (at least any eigenvalue that evolves smoothly from the original CFT).
However, typical QFT states map not to pure gravity configurations in the bulk but rather to solutions with nontrivial matter fields present. The presence of matter fields requires one to modify the CFT beyond just deforming it by T T . In particular, the full deformation must be manifestly non-local in position space in order to reproduce the momentum-space non-analyticity noted below (7.10). This may be disappointing, but was nevertheless to be expected from e.g the well known failures of boundary-correlator microlocality and strong subadditivity of entropy [23] known to occur when the bulk dual is subjected to a strict radial cutoff and is precisely what was argued in [6] and [7].
We also considered stress tensor correlation functions, focussing on two and three point functions computed to leading nontrivial order in the deformation parameter λ. Here we found agreement, provided one compares results at the same order in 1/c perturbation theory, taking into account that λ ∼ 1/c. An obvious task for the future is to extend this statement to all correlators computed at the classical level in the bulk. The deformed CFT also makes definite predictions for bulk correlators at loop level, and verifying these would be interesting, as it involves the novel question of quantum gravity effects in a space with a boundary at a finite location. One could also consider mixed correlators involving both the stress tensor and scalar operators, and the associated question of what new operators need to be added to the QFT to match bulk results.
Finally, we should note that in [1] the authors have discussed defining a deformed CFT via an alternate Hubbard-Stratanovich type construction, and it would be useful to understand how correlators computed in that theory are related to those studied here. | 9,723 | 2018-01-08T00:00:00.000 | [
"Physics"
] |
Comparative Analysis of Financial Sustainability Using the Altman Z-Score, Springate, Zmijewski and Grover Models for Companies Listed at Indonesia Stock Exchange Sub-Sector Telecommunication Period 2014–2019
This study aims to compare the best bankruptcy prediction models between Altman, Springate, Zmijewski and Grover models against companies listed on the Indonesian stock exchange in the telecommunications sub-sector for the 2014-2019 period. The purposive sampling method is used to obtain a sample of companies with the following criteria: Companies listed on the Indonesian stock exchange, the telecommunications sub-sector, the company has conducted an IPO in 2010, the company is obedient in reporting annual reports from 2014 - 2019 and the company is free from delisting issues. There are 4 companies that meet the purposive sampling criteria, namely PT. Telkom TBK, PT. Indosat TBK. PT. XL Axiata TBK and PT. Smartfren TBK. The data used in this research is secondary panel data. The results showed that only PT. Telkom which is in a healthy financial condition. Meanwhile, PT. Indosat, PT. XL Axiata and PT. Smartfren is consistently in an unhealthy condition based on the analysis of the Altman and Springate models. The calculation of Zmijewski's model and Grover's model gave inconsistent results. Comparative testing of the four bankruptcy analysis models resulted in the Altman, Springate and Grover models recording accurate results but Altman modelling is the best because it is an accurate, consistent, and tested model both descriptively and statistically.
Introduction and Background
At the beginning of 2020, the Indonesian telecommunications society was shocked by the emergence of confirmed news where PT. Indosat. TBK (ISAT) scheduled a layoff plan (PHK) with more than 500 employees affected.
In an announcement to employees on Friday (14/02/2020), which was reported by CNBC Indonesia, President Director & CEO of Indosat Ooredoo Ahmad Al-Neama conveyed three vital changes to Indosat Ooredo's business, namely: 1. Strengthening regional teams so that work is more efficient and is transferred to closeness to customers. 2. Transfer of resources to third parties in the form of such as managed services or back-to-back business. 3. Organizational adjustments, especially in the field of human resources, in order to improve future competitiveness to improve customer satisfaction.
Furthermore, his party believes that this step will improve Indosat's performance, help the company remain competitive amid the challenges of disruption, optimize services, and provide a better experience for customers. "This is a strategic step in making Indosat Ooredo a leading trusted digital telecommunications company," (Asmara, 2020). Sourcing from the publication manuscript of the SDPPI Research and Development team (2018), that based on 2018 BPS data, it can be seen that the telecommunications service sector provides the largest share of GDP in the Information and Communication sector compared to other sectors, with an increasing contribution value in rupiah. However, it seen from the trend, the share of telecommunication services to Information and Communication GDP has decreased. In 2010, the share of the telecommunications services sector reached 76.53%, and experienced a downward trend until 2017. This shows that the growth of telecommunications in Indonesia continues to decline. (Heppy, 2018) When viewed from the number of users, the development of the Indonesian cellular telecommunications industry is starting to experience saturation: it can be seen from the teledensity of cellular subscribers which reached more than 140% in 2017. The current trend in telecommunications technology has shifted from voice and SMS to data traffic, and has had an impact on revenue growth. operator slowed down even more. In Indonesia, the telecommunications market (voice and data) is contested by several cellular operators, namely PT Hutchison 3 Indonesia (Tri), PT Indosat (Indosat), PT XL Axiata (XL), PT Sampoerna Telekomunikasi Indonesia (Ceria), PT Telekomunikasi Selular (Telkomsel), PT Smartfren,. The number of telecommunication operators is considered inefficient, taking into account the proportion of the market share of each operator, where 90% of the market share is in the 3 largest cellular operators (Heppy, 2018).
We observed from the Net Profit Margin, the results achieved by telecommunication companies Telkom, Indosat, XL Axiata and Smartfren are reflected in the following figure. According to Mulyana and Nugroho (2018), Net Profit Margin (NPM) can be interpreted as a measurement of how much net profit is created from every one rupiah of sales achieved. High net profit margin illustrates good efficiency in the company and better management of costs. Conversely, low net profit indicates inefficiencies in cost management that could have been avoided.
PT. Telkom can show consistently positive performance against the background where PT. Telkom is the only state-owned company in a company listed on the Indonesia Stock Exchange in the telecommunications subsector and is involved in the cellular sector through its subsidiary, PT. Telkomsel with a share of 65%.
PT. Telkom, which was founded on July 6, 1965, has a telecommunications network that stretches from Sabang to Merauke and has a source of funding from the government (BUMN), resulting in a well-established network infrastructure and human resources. This is what fundamentally makes the difference between PT. Telkom and other telecommunication companies, namely efficiency and synergy in the use of Capex (Capital Expenditure) and Opex (Operational Expenditure) on infrastructure where PT. Telkom can use the infrastructure that has been built since 1965 while other telecommunication companies have to build new infrastructure or by leasing which of course becomes operational expenses. Ariyanti (2016) states that based on data from Puslitbang SDPPI in 2015, PT. Telkomsel requires funds of ± 500 million rupiahs for the implementation of 1 e-nodeB where there is a possibility that other telecommunication companies require higher costs because they do not have a strong infrastructure as strong as PT. Telkom to create a new network or lease to a network infrastructure provider.
On the other hand, PT. Telkom also has several subsidiaries that are engaged in various fields, ranging from the telecommunications, property, infrastructure, and goods and services sectors that can maximize potential revenue in the event of a decline in the telecommunications sector (investment diversification).
Figure 3: Average NPM and WC Telco
From Figure 3, it can be seen in general that the average net profit margin of telecommunication companies in Indonesia has decreased with an increase occurring in 2019 but it turns out that the average working capital of telecommunication companies has decreased to a level below minus 11 trillion rupiahs, which means that in the operation of telecommunication companies Indonesia relies heavily on short-term debt due to poor profitability. With this downward trend, it becomes a question whether telecommunications companies in Indonesia experience financial distress and whether these companies can survive in the future.
Theory and Hypotheses Development
This research uses the grand theory of financial distress with the Bankruptcy Prediction Model (BPM) which uses the Altman model analysis, Springate model analysis, Zmijewski model analysis and Grover model analysis.
Middle theory to cover this research is the theory of corporate financial management (Corporate Financial Management) and Applied Theory in this research, especially in Working Capital, Total Asset, Total Sales and EBIT (Earnings Before Interest and Tax) to then get the calculation results of each bankruptcy prediction model analysis.
Financial Distress
Financial distress is a condition of a company that is experiencing problems in providing its financial needs and can be said to be in an unhealthy financial condition. An example is the difficulty in paying debts or difficulties in financing its operational activities.
Platt & Platt (2002) describe the condition of a company's financial difficulties as a stage of decline in financial conditions that occur before liquidation or bankruptcy occurs. The condition of the company's financial difficulties usually starts with difficulties in fulfilling the company's obligations, especially short-term liabilities, which include liquidity obligations, and also include obligations in the solvency category (solvable).
Bankruptcy Prediction Model
Prihadi (2016) states that bankruptcy can be interpreted as a company's failure to operate, manage, and earn profits. Bankruptcy can be described as a situation where a company is unable to settle its short, medium and long term obligations. Copeland and Weston (1988) explain the definition of bankruptcy as described in their book Financial Theory and Corporate Policy, namely: 1. Economic Failure What is meant by economic failure is if the company's income is still insufficient and unable to cover all company expenses.
Financial Failure
Financial failure is defined as insolvency, which is a financial condition that cannot be resolved, for example, failure to meet the performance requirements and company benchmarks that have been set, for example, the criteria for the current asset ratio (Quick Ratio) and the current debt ratio (Debt to Asset). Ratio).
From some of the explanations above, it can be concluded that the definition of bankruptcy is a condition in which the company is in a bad financial position and experiences difficulties in carrying out company operations which can result in the company losing business competition and consequently will experience decreased revenue and profits and ultimately cannot. achieve company goals. Arini and Triyonowati (2013) explain that there are four indicator variables that can be used to observe a company's potential level of bankruptcy, namely (1) the level of company profits, (2) effectiveness and efficiency of debt, (3) efficiency of fixed and operational costs and (4) stock returns.
Investors are one of the parties who need bankruptcy predictions from a company because before investors make investment, financial data analysis is needed about how the company's prospects in the future and how it has achieved in the past, so that it can ensure that the investments of investors and investors are on the same path. expected, that is to provide appropriate benefits and returns. This is also in line with the opinion of Wulandari, Burhanuddin and Widayanti (2017) that the level of bankruptcy will have a bad impact and precedent not only on internal companies but also on external parties, in this case investors are also included. Amanah, Atmanto and Azizah (2014) explain in their research where the level of investor interest in the company is directly proportional to the value and return of stock prices. Therefore, the company will always try to maximize and optimize the value of its shares so as to increase investor interest in the company.
Altman Z-Score revision 1995
In Z-Score revision 1995, Altman eliminates the X5 variable (sales to total assets) because this ratio is very varied in industries with different asset sizes. Zmijewski (1984) in his research uses ratio analysis that measures the performance, leverage, and liquidity of a company for his prediction model, Zmijewski also requires one crucial point. The proportions of the sample and the population must be determined from the beginning, to obtain the prediction frequency of the company's financial distress. This frequency is obtained by dividing the number of samples that have gone bankrupt by the total number of samples.
The sample used by Mark Zmijewski amounted to 840 companies, consisting of 40 companies that experienced bankruptcy, 800 companies that did not go bankrupt. Data obtained from the Compustat annual industrial file. Data were collected from 1972-1978. The statistical method used is the same as that used by Ohlson, namely logit regression. By using this method, Zmijewski produced the following model: Not Bankrupt
Grover
Grover's model (2001) is a model created by designing and reassessing the Altman Z-Score model. Jeffrey S. Grover used the sample according to the Altman Z-Score model in 1968, adding thirteen new financial ratios. G = 1,650X1 + 3,404X2 -0,016ROA + 0,057 X1 Working Capital / Total Asset X2 Earnings Before Interest and Taxes / Total Asset ROA Return on Asset G Grover G-Score Not Bankrupt
3. Research Framework
This study aims to determine whether telecommunication companies in Indonesia are in good condition or on the contrary, are in a bad condition. For companies that are in bad shape, with the earliest possible detection of potential bankruptcies, managers and top leaders can react to the situation and take strategic steps to restore the situation and improve company performance so that the company can improve and get out of the bankruptcy zone.
As for companies that are in good condition, they can maintain their condition and continuously evaluate their company's performance to ensure that the company's goals remain at the desired rate and can map future needs in the company's winning strategy, in the case of current telecommunications companies, namely the implementation of 5G. and data monetization to further increase company revenue. This study compares the four bankruptcy prediction models, namely the Altman, Springate, Zmijewski and Grover models. To ensure the consistency of the results of the four analysis models and compares the level of accuracy to obtain the model with the highest accuracy for telecommunications companies in Indonesia listed on the Indonesia Stock Exchange over a period of time. 2014-2019.
Based on this, a conceptual framework is made as follows:
Altman's Model Hypothesis
Research by Fifriani and Santosa (2019) and Nandini et al (2018) states that the revised Altman model can be used to predict the possibility that telecommunications companies are in a healthy state or vice versa in an unfavourable state or bankruptcy. H1: The revised Altman model can be used to predict the financial distress of telecommunications companies listed on the Indonesia Stock Exchange for the period 2014-2019
The Springate Model Hypothesis
Research by Ben, Dzulkirom and Topowijono (2015) states that the Springate model can be used to predict company bankruptcy.
H2: The Springate model can be used to predict the financial distress of telecommunications companies listed on the Indonesia Stock Exchange for the period 2014-2019.
Zmijewski's Model Hypothesis
Research by Manalu, Octavianus and Safarina (2017) and Salim (2016) states that the Zmijewski model can be used to predict company bankruptcy.
Research Methodology
The type of research chosen by the researcher is an ex-facto quantitative approach in which data is collected from various sources to be processed and analysed. The ex-facto method was chosen because the writer used factual data that had already occurred and was audited, and the writer did not have control over the independent variables.
Population and Sample
According to Sugiyono (2012) population is a whole collection of elements that are similar but different because of their characteristics. The population in this study are all companies listed on the Indonesia Stock Exchange, the telecommunications subsector for the period 2014-2019. The sample in Sugiyono (2012) explanation is part of the elements of a population. The sample is part of the population or representative of the population which is seen as representative of the object under study.
The criteria for selecting samples in the object of this study are as follows: 1. Companies listed on the Indonesia Stock Exchange in the telecommunications subsector. 2. Telecommunication sub-sector companies that have IPO in 2010.
3. The company is obedient to reporting annual reports for the 2014-2019 period. 4. The company is free from issues and potential delisting.
Based on the sample selection criteria above, what can be used as a research sample are as follows:
Types of Data
Basically, research is an activity of collecting information or data that is very useful to find out something, solve problems, or to develop knowledge. Information that has been scaled is called data.
The type of data used in this study is the type of panel data where the data taken is data from 2 or more telecommunication subsector companies listed on the Indonesia Stock Exchange for the period 2014-2019.
Data Collection Techniques
Secondary data is data that comes from research results or other people's presentations which are made for different purposes. Secondary data can also be interpreted as additional data that is not obtained first-hand but second, third or so on.
Because documents are usually written by third parties, such as journalists or screenwriters who are not scientific informants. The data used in the document is of course not first-hand.
Apart from being collected by the owner, secondary data is also collected by other parties, for example banking financial data. The data is of course stored and owned by each bank, but there are other parties that keep the data, for example Bank Indonesia or the Financial Services Authority. Even private institutions such as Infobank and Bisnis Indonesia maintain the data because it is an important commodity.
This study uses secondary data, namely data available on the official website of the Indonesia Stock Exchange (www.idx.go.id) and those on the official pages of each issuer.
Descriptive Analysis
The technical descriptive analysis in this research can be described as follows: 1. Performing analysis calculations on the research sample using the Altman, Springate, Zmijewski and Grover models. 2. Classifying the calculation results in the first point. The classification is divided into 2, namely companies in a healthy and unhealthy condition. The possibility of classification confusion can arise because the Altman method has three possibilities, namely (1) Healthy, (2) Gray area and (3) Unhealthy. Based on research from Suwitno (2013), it is stated that 93% of companies located in the Gray area do not go bankrupt in the following year. Therefore, we categorize the gray area in an unhealthy condition. 3. Validate the analysis results with the actual conditions of the company.
Classical Assumption Test and Hypothesis
The panel data regression model can be said to be a good model if it meets the Best Linear Unbiased Factor (BLUE) criteria. BLUE can be achieved if it fulfils the classical assumptions. The classical assumption tests include normality test, autocorrelation test, multicollinearity test, and heteroscedasticity test. Arikunto (2013) states that not all classical tests must be done because: 1. Linearity test is hardly performed on every linear regression model. Because it has been assumed that the model is linear. Even if it is only done to ascertain and see the degree of linearity. 2. The normality test is basically not a BLUE requirement and some opinions do not require this requirement as something that must be met. 3. Autocorrelation only occurs in time series data. Autocorrelation testing on data that is not timeseries (crosssection or panel data) will be meaningless. 4. Multicollinearity needs to be done when linear regression uses one or more independent variables. If there is only one independent variable, multicollinearity is impossible. 5. Heteroscedasticity usually occurs in cross-section data, where panel data is closer to the characteristics of cross section data than to time series data.
Comparison of Descriptive Analysis Results with Statistical Analysis
After the results of descriptive and statistical calculations are obtained, comparisons are made to ensure whether the results obtained in the study are consistent between the results of descriptive analysis and statistical analysis which shows that the modelling can be applied to companies listed on the Indonesia Stock Exchange in the Telecommunications subsector for the period 2014 -2019.
Measurement Accuracy of Bankruptcy Prediction Models
In measuring the accuracy of the bankruptcy prediction model, we compare the results of the prediction model calculations to be compared with the company's Debt to Asset ratio.
The choice of the Debt to Asset ratio is because the DAR ratio is one of the ratios that includes the Solvency ratio which assesses whether a company can meet all its obligations or debts using its assets. The higher the DAR ratio, the higher the probability that the company is in a bad financial condition.
Ross (2019) states that from a risk point of view, DAR with a value below 40% or 0.4 is preferred. And because debt is always correlated with interest rates that are not related to profitability, too much debt can consume cash flow, which in turn leads the company to sell assets to pay debts or declare bankruptcy.
Sean Ross (2019) also said that investors prefer companies with DAR in the range of reference 0.3 (30%) to 0.6 (60%). With the reference from Ross (2019), we use the reference standard of a more conservative DAR ratio of 50% in the hope that the company's financial difficulties can be identified as early as possible and corrective steps can be taken quickly to avoid the possibility of bankruptcy.
Selection of the Best Bankruptcy Prediction Model
A bankruptcy prediction model that is consistent in its calculations both statistically and descriptively is then chosen which has the best level of accuracy. Which will then be further analysed descriptively with its relationship to NPM, Working Capital or DAR to get the best prediction results that can be used to predict financial distress conditions in companies listed on the Indonesia Stock Exchange in the Telecommunications subsector for the period 2014 -2019.
Altman
Alman model analytic result can be seen as follow: consistently produces a good Z-Score, is in 2014-2018 and in 2019 it is in the grey zone. For the other three companies, namely PT. Indosat, PT. XL Axiata and PT. Smartfren is consistently in the bankruptcy zone, which should become a concern for the management and top leaders of the company.
Springate
Springate model analytic result can be seen as follow: For the other three companies, namely PT. Indosat, PT. Xl Axiata and PT. Smartfren is consistently in the bankruptcy zone, which should be a concern for the management and top leaders of the company.
Zmijewski
Zmijewski analytic model result can be seen as follow: From table-5 above, we can observe that of the four companies included in the sample, only PT. Telkom which consistently produces X-Score in the not bankrupt category.
For the other three companies, namely PT. Indosat, PT. XL Axiata and PT. Smartfren is fluctuating in a bankruptcy zone which is of course a vigilance for the management and top leaders of the company.
The calculation anomaly is obtained for PT. XL Axiata in the 2018 period, which recorded a negative X1 and X2 which was in the 66% position, which means that it has a debt of 66% of the assets resulting in Zmijewski being in a good position or not bankrupt.
The anomaly is then obtained from calculations for PT. Smartfren where the value of X1 / ROA recorded a negative value throughout the observation period, but the results of the calculation of the Zmijewski model produced in 2015, 2017-2019 are in the category of not bankrupt.
With this anomaly, it is concluded that Zmijewski's analysis model calculations are not suitable for application to telecommunications companies in Indonesia.
Grover
Grover model analytic result can be seen as follow: From table-6 above, above, we can observe that of the four companies included in the sample, only PT. Telkom which consistently produces G-Score in the not bankrupt category.
For the other three companies, namely PT. Indosat, PT. Xl Axiata and PT. Smartfren is consistently in the bankruptcy zone, which is a concern for the management and top leaders of the company.
Classic Assumption and Hypothesis Testing
The classical assumption test is used as a preparation for conducting statistical tests in this study, where in this research will make comparisons to the analysed model using two methods, namely statistical methods and descriptive methods. The following classical statistical assumption tests use SPSS software as one of the tested software for use in statistical analysis in scientific research.
A. Normality Test
The normality test in this study used the Shapiro Wilk method which was obtained through SPSS software. The variables used as input for the normality test process come from the calculation results of telecommunications companies listed on the Indonesia Stock Exchange for the period 2014-2019 for each analysis model, namely the Altman, Springate, Zmijewski and Grover models. So that we get 96 input data.
The results of the calculation of the Shapiro Wilk normality test are as follows: In the normality test, it is stated that good data is data that is normally distributed, where data that is normally distributed usually has a significance (Sig)> 0.05. In table-7 above, we can observe that there is only 1 data that is below 0.05, namely the Sig value of PT. Smartfren for Altman analysis. However, because the resulting value is the actual calculation result, the research is continued.
We also examines more about the significance value of PT. Smartfren below 0.05, when we tries to ignore the value calculated by PT. Telkom, then the significance value of PT. Smartfren becomes above 0.05. From this result, we conlcude that the statistical data assumes anomalies occur in the PT. Smartfren for Altman analysis due to the variations that are too far when compared with the calculation results of PT. Telkom.
B. Multicollinearity Test
The multicollinearity test is intended to determine whether there is multiple collinearities in the research results. The parameter used is if the tolerance is above 0.10 and the VIF is below 10, then the data is declared good and can be used in statistical regression research.
The results of statistical calculations for the multicollinearity test in this study are presented as follows: and Smartfren are all below 0.10 and the VIF value from the BPS calculation results is not greater than 10 so that the regression analysis can be used and can be proceed further.
C. Heteroscedasticity Test
In heteroscedastic testing, we used the Glejser method, which obtained the following results: From table-9 above we can observe that the significance value for the variable calculation analysis of the Altman, Springate, Zmijewski and Grover models for Indosat, Telkom, XL Axiata and Smartfren has a significance value above 0.05 so that the independent variable is heteroscedasticity and can be continued for regression calculations furthermore.
D. Independent Variable Test (t-test)
In performing the T test, the T Table value for this study was found to be 2.09302 which will then be compared with the results of the SPSS calculation as follows:
i. Altman Hypothesis Statistical Testing
In the T test value of the Altman variable, it was found that the significance of 0.020 was less than 0.05 and the t value of 2.528 was greater than the t table value so that the H1 hypothesis was accepted, namely "Revised Altman Model can be used to predict the financial distress conditions of telecommunications companies listed on the Indonesia Stock Exchange for the period 2014-2019".
ii. Springate Hypothesis Statistical Testing
In the T test value of the Springate variable it was found that the significance was 0.034 which was smaller than the significance limit of 0.05 and the t value of 2.283 was greater than the t table value so that the H2 hypothesis was accepted, namely "The Springate Model can be used to predict the financial condition of the company's distress. telecommunications listed on the Indonesia Stock Exchange for the period 2014-2019".
iii. Zmijewski Hypothesis Statistical Testing
In the T test value of the Zmijewski variable, it was found that the significance of 0.854 was greater than the significance limit of 0.05 and the t value of 0.187 was smaller than the t table value so that the hypothesis H3 was rejected so that it became "Zmijweski's model cannot be used to predict financial distress. telecommunications companies listed on the Indonesia Stock Exchange for the period 2014-2019".
iv. Grover Hypothesis Statistical Testing
In the T-test value of the Grover variable, it was found that the significance of 0.065 which is greater than the significance limit of 0.05 and the t value of -1.956 which is smaller than the t table value causes the H4 hypothesis to be rejected so that it becomes "Grover's model cannot be used to predict financial conditions. distress of telecommunication companies listed on the Indonesia Stock Exchange for the period 2014-2019".
v. The Best Hypothesis Statistical Testing is based on the Comparison of the T test
In table-10, we can see that the results of the Altman model analysis have a t value of 2.528 which is higher than the other models so that H5 is accepted, namely "There will be one model that is the most accurate among the Altman model, the Springate model, the Zmijewski model and the Grover model. predict the bankruptcy of telecommunication companies listed on the Indonesia Stock Exchange for the period 2014-2019".
E. Overall Significance test (f-test )
In the F test analysis, the F table value of 2.87 is obtained which will then be compared with the results of the SPSS to determine whether these independent variables have a simultaneous effect on company performance.
The F test analysis table through SPSS can be seen as follows:
Comparative Analytic between Descriptive and Statistical result
In the analysis of the hypothesis statistical testing, we found that: a) The Altman model can be used to predict the company's financial difficulties at telecommunications companies in Indonesia.
b) The Springate model can be used to predict the financial difficulties of companies in telecommunications companies in Indonesia. c) The Zmijewski model cannot be used to predict the financial distress of companies in telecommunications companies in Indonesia. d) The Grover Model cannot be used to predict the financial difficulties of companies in telecommunications companies in Indonesia. e) Altman is the best model to use in predicting the financial difficulties of telecommunications companies in Indonesia.
Accuracy of Measurements
In measuring the accuracy or accuracy of the company's bankruptcy prediction model compared to the binary condition, namely if DAR > 50% is in a bad financial condition and if DAR < 50% then the company is in good financial condition, we get results that can be seen in the table as follows: In table-12 we can see that only the Zmijewski model has a low accuracy of 58% and this is in accordance with the descriptive and statistical research results in the discussion of the previous sub-chapter which states that the Zmijewski model may not be suitable if used in predicting the company's financial difficulties.
Best Bankruptcy Prediction Model
From the four bankruptcy prediction models, based on the analysis results in previous chapter, it is concluded that Altman is the best bankruptcy prediction model in predicting the company's financial distress with the highest ttest value.
Descriptive analysis also shows that Altman is the best prediction model because in addition to being consistent and accurate, Altman's prediction model is the only prediction model that gives Telkom a gray zone rating, which is indicated by a negative working capital value. and a fairly high decline in the value of working capital in 2019. In Table-13 The decrease in working capital to the lowest level is certainly a warning to the top management of PT. Telkom so as not to follow the trend of other telecommunication companies (PT. Indosat, PT. XL Axiata and Smartfren) which had negative working capital during the observation period.
Management Implications
Net Profit Margin, which is the profitability ratio, shows how much profit you get from sales. Setiawati and Lim (2016) state that profitability affects firm value. The higher the level of profitability, the higher the company's ability to increase sustainable growth and increase share prices that will attract investors. High Net Profit Margin also allows companies to improve the welfare of their employees through bonus distribution or salary increases. Morshed (2020) argues that working capital greatly influences the company's profitability. Negative working capital means that the company in carrying out its operational activities relies on short-term debt. And the profit obtained from these operational results will certainly be reduced due to having to pay interest on short-term debt.
A negative net profit margin value accompanied by negative working capital will worsen company performance because there is no profit that can be used to pay current debts.
Some of the management implications that arise including: 1. Negative and low Net Profit Margin results in the scale value of the Altman and Springate models which are in bad financial conditions, which can be used as a reference by the management of PT. Indosat, PT. XL Axiata and PT. Smarfren in doing short-and long-term planning. Companies must focus on getting a good net profit margin (~ 20% according to observations on PT Telkom) and sustainable so that they can develop positively without being burdened by ineffective operating costs and short-term and long-term obligations. Rational steps that can be taken include streamlining the organization, selling assets, emphasizing the principle of prudence in expansion, investment and increasing sales effectiveness. 2. Consistent research results through the Altman and Springate models to PT. Indosat, PT. XL Axiata and PT.
Smartfren (in an unfavourable financial condition) must be immediately followed up with concrete steps so that these companies can immediately rise up and achieve positive results. Without concrete steps and taking steps to rationalize it, it is not impossible for PT. Indosat, PT. XL Axiata and PT. Smartfren will follow in the footsteps of PT. Bakrie Telekom is currently in a "suspended animation" condition. 3. Management of PT. Telkom must immediately respond to the drastic reduction in working capital in 2019 (which resulted in PT. Telkom entering the grey zone of the Altman model) with tangible, efficient and sustainable steps so as not to follow the pattern of other telecommunications companies, namely negative working capital throughout the observation year. of course, be the first step in the company's financial difficulties.
Conclusion
Based on the research results, it is concluded that of the four telecommunications companies listed on the Indonesian stock exchange for the 2014-2019 period, only PT. Telkom which is in a healthy financial condition. PT. Indosat, PT. XL Axiata and PT. Smartfren, all three are consistently in an unhealthy condition based on the results of the analysis of the Altman and Springate models.
The calculation of Zmijewski's model and Grover's model gave inconsistent results. When comparisons were made between the four bankruptcy analysis models, Altman, Sprigate and Grover recorded accurate results, but we only recommend the analysis results from Altman because it is an accurate, consistent, and tested model both descriptively and statistically.
The results of this study can certainly be of concern to the government where apart from PT. Telkom, which is a state-owned company, the 3rd telecommunications network provider including the private sector is in a bad condition.
If the private sector in telecommunications is reduced or lost, then the losers are the customers because with reduced competition there will be a monopoly on prices and services which results in the lack of choices provided by the telecommunications sector.
In addition, if the private sector is reduced, this will have an impact on the thousands or tens of thousands of employees who are part of the private sector.
PT. Bakrie Telecom as in Kokyung and Khairani's (2013) research can be a good benchmark for comparison, where Kokyung conducted research in 2013 and in 2020 PT. Bakrie Telecom is operationally gone and is waiting for delisting or liquidation.
Net Profit Margin, which is the profitability ratio, shows how much profit you get from sales. Setiawati and Lim (2016) state that profitability affects firm value. The higher the level of profitability, the higher the company's ability to increase sustainable growth and increase share prices that will attract investors. High Net Profit Margin also allows companies to improve the welfare of their employees through bonus distribution or salary increases.
Advice for Telecommunication Company Management in general.
Our research resulted in a negative and decreasing pattern of the company's working capital value. This indicates that in their operations, telecommunication companies in Indonesia depend on short-term loans. Negative working capital accompanied by a negative or small Net Profit Margin can cause a chain reaction that causes the company to be in a circle of bad financial conditions and difficulty in recovering, as in the proverb, dig a hole to close the hole.
In this case we suggest to carrying out effective and selective operational activities in terms of expansion, adding assets and in terms of planning, planning new products to be flexibly adapted to the circumstances.
Advice to Government
The government as a regulator is also obliged to actively encourage mutually beneficial competition between telecommunications companies and with customers, in this case the people.
According to our opinion, the government can do what is based on article 33 of the UUD45 (Indonesian Constitution) which states that the lives of many people are managed by the government, in this case the government may issue regulations on interconnection networks that can be utilized by PT. Indosat, PT. XL Axiata and PT. Smartfren effectively and encourages positive business growth.
Another thing is the adjustment of telecommunication upper and lower tariffs and the setting of internet tariffs whereas we know, 4G services encourage large data usage and it seems that this has not been felt by PT. Indosat, PT. XL Axiata and PT. Smartfren which is depicted from the results of the bankruptcy analysis that is in unfavourable condition.
Suggestions for Future Researchers
In 2020, research methodologies using machine learning and artificial intelligent can open up the possibility of research methods that are more modern, comprehensive and can be used across industries.
The limitation of the author at this time is that we only use the Multivariant Discriminant Analysis (MDA) model such as Altman, Springate, Zmijewski and Grover with a span of only 6 years.
In line with Qu, Quan, Minglong & Shi (2020), we suggest that further researchers conduct research on comparisons between the MDA model (Altman and Springate) with the Deep Belief Network (DBN) and Convolutional Neural Network (CNN) models with a longer period. and make predictions for the next 5 years, so that a prediction model that is relevant for now and with the right expectations can be used not only to predict the condition of the company's financial difficulties, but also to be able to find out the true "disease" in the company so that " the drugs" used are effective and can lead the company to recover and become a company that has good finances and continues to grow. | 8,729 | 2021-02-03T00:00:00.000 | [
"Business",
"Economics"
] |
Association between TGFB1 genetic polymorphisms and chronic allograft dysfunction: a systematic review and meta-analysis
Background Epidemiological studies have investigated the role of transforming growth factor-β1 (TGF-β1) in chronic allograft dysfunction (CAD) following kidney transplantation. TGFB1 gene polymorphisms (SNP rs1800470 and rs1800471) may be associated with the risk of CAD. In this meta-analysis, the relationship between these two variations and the risk of CAD was explored. Materials and Methods MEDLINE, EMBASE, the Cochrane Central Register of Controlled Trials (CENTRAL), Embase, the Chinese CNKI and WANFANG databases were searched. Data were extracted and pooled results were estimated from odds ratios (ORs) with 95% confidential intervals (95% CIs). Quality assessments were performed, and publication bias of all eligible studies examined. Results Eight studies with 1038 subjects were included in our analysis. According to the effects on TGF-β1 secretion, haplotypes were categorized as “HIGH”, “INTERMEDIATE” and “LOW”. The combined results showed a statistically significant difference of TGFB1 haplotypes between the CAD recipients and control subjects when “HIGH” with “INTERMEDIATE” and “LOW” (“HIGH” vs. “INTERMEDIATE” + “LOW”: OR: 3.56, 95% CIs: 2.20, 5.78, P < 0.001) were compared. No significant association was found between the TGFB1 codon 10 or codon 25 and the CAD risk in all five genetic models. Conclusions Our meta-analysis has found the haplotype of TGFB1 codon 10/25 T/T G/G and T/C G/G genotypes, associated with increased production of TGF-β1, was linked with CAD risk following kidney transplantation. Moreover, no significant difference was found between TGFB1 codon 10 or codon 25 and the development of CAD.
INTRODUCTION
Kidney transplantation is the optimal therapy for end-stage renal disease [1]. Short-term allograft survival has been significantly improved due to advancements in immuosuppressive agents and surgical techniques [2]. However, long-term allograft survival, especially in the period of 10 to 20 years after kidney transplantation, has not accommodated the increase of short-term survival.
Numerous studies have shown that chronic allograft dysfunction (CAD) is the main cause of graft failure [3,4].
TGF-β1 is a multifunctional growth cytokine in humans and one of the significant mediators of wound healing and tissue regeneration [12]. TGFB1 is mapped on to chromosome 19q13.1-13.3 with seven exons and six introns, and the expression of TGFB1 and production of TGF-β1 is related to single nucleotide polymorphisms (SNPs). TGFB1 contains two SNPs, +869T/C at codon 10 (rs1800470) and +915G/C at codon 25 (rs1800471) contributing to variation in TGF-β1 production both in vitro and in serum [13]. There is increasing evidence that the level of production of a series of cytokines can be modulated by polymorphisms in the corresponding genes [14]. Since fibrosis-related cytokines play an important role in the inflammatory and immune response that mediate the outcome of kidney transplantation, the relationship between those cytokine polymorphisms, particularly TGFB1 SNPs, and CAD has been explored in several studies [15][16][17]. However, the results of these studies were inconsistent and often conflicting.
Based on the crucial role of TGF-β1 in CAD pathogenesis, we performed meta-analysis to investigate the contributions of two TGFB1 SNPs to CAD risk.
Search strategy
A comprehensive literature search was performed in PubMed, the Cochrane Central Register of Controlled Trials (CENTRAL), Embase, the Chinese CNKI and WANFANG databases (updated on December 20st 2016) by two independent authors (Kun L and Xuzhong L). The following keywords were used: (transforming growth factor-beta 1 OR TGFB1), AND (polymorphisms OR SNPs OR variants), AND (chronic rejection OR chronic allograft nephropathy OR chronic allograft dysfunction), AND (MESH item, kidney transplantation). The equivalent Chinese terms were used in the Chinese databases. Furthermore, the reference lists of all studies included in the meta-analysis were also reviewed for possible inclusion.
Inclusion and exclusion criteria
Studies were included if they met the following eligibility criteria: (1) case-control studies designed to investigate the relationship between TGFB1 SNPs and chronic allograft dysfunction after kidney transplantation; (2) available information on the frequencies of genotype or allele in case and control groups; (3) all subjects from three allelic groups were derived from a population within the same geographic area and ethnic background; (4) full-text article were published in English or Chinese. The exclusion criteria were: (1) Case reports; (2) reviews; (3) animal experiment, chemistry, or cell line studies; (4) studies in a language other than English or Chinese; (5). No eligible or insufficient data frequencies of genotypes for each polymorphism could be extracted for meta-analysis. Two authors (Kun L and Xuzhong L) independently assessed and selected trials for final analysis with discrepancies resolved by consensus.
Data extraction
Two investigators (Kun L and Xuzhong L) independently extracted relevant data from all selected studies and reached consensus for all items. Basic characteristics of patients were collected as following: first author's name, year of publication, nation/race, number of subjects, the proportion of male subjects, mean age and method of genotype. In addition, the genotype and allele frequencies of TGFB1 SNPs were collected using a standardized data extraction form. Missing data was examined by contacting the first or corresponding author.
Quality assessment
The methodological quality of each included study was assessed by the Newcastle-Ottawa quality assessment scale (NOS) (18). Each study was evaluated on the standard criteria and categorized based on three factors, including selection, comparability and exposure. Scores ranged from 0 to 9; 9 points represented the highest quality and lowest risk of bias.
Statistical analysis
The pooled data was used to assess the strength of the association between TGFB1 polymorphisms and CAD using odds ratio (OR) with 95% confidence intervals (95% CIs) in a dominant model, a recessive model, a co-dominant model, a co-recessive model and an allele model. A p value less than 0.05 was considered statistically significant. Heterogeneity test among trials was determined by I 2 , and was defined as 100% *(Q-df )/Q, where Q is Cochran's heterogeneity statistic and df is the degrees of freedom, with a fixed-effect model set at low statistical inconsistency (I 2 < 25%). Otherwise, a random-effects model was selected, which is better adapted to clinical and statistical variations [18][19]. To explore the potential effects of heterogeneity, stratification analysis by ethnicity, age and quality criteria was carried out. The Egger's regression test and funnel plots were used to assess potential publication bias. Cumulative meta-analysis was carried out by the year of publication. All analyses were performed using STATA (release 12.0, College Station, TX).
Literature selection and study characteristics
A flow diagram of the screening process for included studies is shown in Figure 1. Primary screening identified 18 potentially relevant articles, including 15 articles in English and 3 in Chinese. Based on their titles and abstracts, 8 articles were excluded due to unrelated subject or no available data. After screening the full articles, a total of 8 trials with 1038 subjects met the criteria and were included in our meta-analysis.
Quantitative synthesis
A total of 8 trials were included in the analysis of association between TGFB1 polymorphisms and CAD. For TGFB1 codon 10 T/C, no significant difference was found between CAD recipients and control subjects in all five models (TT vs. TC+CC: OR: 1.37, 95% CIs: 0. Figure 2). Furthermore, we performed the meta-analysis to investigate the effect of TGFB1 haplotype on the pathogenesis of CAD. According to the effects on TGF-β1 secretion, these haplotypes were categorized as "HIGH", "INTERMEDIATE" and "LOW" (Supplementary Table 1) [16]. In summary, we observed statistically significant differences in TGFB1 haplotypes between the CAD recipients and control subjects when comparing the "HIGH" with "INTERMEDIATE" and "LOW" ("HIGH" vs. "INTERMEDIATE" + "LOW": OR: 3.56, 95% CIs: 2.20, 5.78, P < 0.001, Figure 2).
Sensitivity analysis and publication bias
Sensitivity analysis was performed by sequential removal of individual studies to reflect the effect of individual data on the pooled ORs. No effect of any study was found on the pooled results in the above five models. The Begg's funnel plot and Egger's linear regression test was performed to determine publication bias. Results of funnel plot and regression test showed that there was no significant publication bias (Table 3).
DISCUSSION
The present meta-analysis included 8 studies with 1038 renal transplant recipients to assess the correlation between two TGFB1 SNPs, codon 10 and 25, and CAD risk. In addition, this meta-analysis presented the first cumulative meta-analysis on this topic. In the cumulative meta-analysis, we found that "HIGH" haplotype, which contains codon 10, 25 T/T G/G and T/C G/G genotypes, was strongly associated with the development of CAD following kidney transplantation. However, no significant association of TGFB1 SNPs in all five genetic models with CAD risk was observed.
TGF-β1 is a multifunctional cytokine that regulates the proliferation and differentiation of many cell types, and has been identified as an important promoter of fibrogenesis in various cells and tissues [26]. There is increasing evidence that TGF-β1 is strongly associated with the pathogenesis of interstitial fibrosis in kidney through a physical process, including EMT and EndMT [27]. The TGFB1 SNPs +869T/C at codon 10 and +915G/C at codon 25 are important in the signal sequence and may influence the secretion of TGF-β1, contributing to the occurrence of CAD [13]. In our study, we found that the haplotype of TGFB1 codon 10, 25 T/T G/G and T/C G/G genotypes, associated with higher production of TGF-β1, are more susceptible to CAD following kidney transplantation when compared with haplotypes of other genotypes. This is consistent with the results of studies conducted by P N. Nikolova [17].
The TGFB1 codon 10 and codon 25 were associated with inter-individual variation in the level of TGF-β1 production. Dunning et al. (13) found that secretion of the proline form of TBG-β1 was 2.8 times that of the leucine form of TGF-β1 at codon 10 in transfected Hela cells, therefore the proline mutation at codon 10 could increase the amount of TGF-β1 protein secretion. Similarly, Cambien et al. [28] believed that the substitution of arginine with proline corresponded to a change from a large polar acid to a small apolar acid and concluded that this may affect the export of TGF-β1 protein. In our study, no significant association of TGFB1 codon 10 or codon 25 with the CAD risk was found in all five models, suggesting that the single mutation in codon 10 and codon 25 were not responsible for the pathogenesis of CAD after kidney transplantation.
Notably, these results should be interpreted with caution. The case number of recipients with CAD included in our analysis is relatively small, which could lead to relatively high heterogeneity. Due to the limited number of studies in our analysis, we categorized CAN, confirmed by allograft renal biopsy, into the diagnosis of CAD, instead of subgroup analysis. Finally, there were at least 10 SNPs reported in the TGFB1 gene, and except for codon 10 and codon 25, rare studies have focused on other SNPs, which may play a role in the pathogenesis of CAD.
In conclusion, our meta-analysis found the haplotype of TGFB1 gene codon 10/25 T/T G/G and T/C G/G genotypes, associated with increased production of TGF-β1, was linked with CAD risk following kidney transplantation. Moreover, no significant difference was found between TGFB1 codon 10 or codon 25 and the development of CAD. Further studies incorporating subjects with difference ethnic backgrounds combined with re-sequencing of the marked region and functional evaluations are warranted.
ACKNOWLEDGMENTS AND FUNDING
We would like to thank the native English speaking scientists of Elixigen Company (Huntington Beach, California) for editing our manuscript. | 2,633.4 | 2017-07-24T00:00:00.000 | [
"Medicine",
"Biology"
] |
Unraveling the Effects of Task Sequencing on the Syntactic Complexity, Accuracy, Lexical Complexity, and Fluency of L2 Written Production
Unraveling the Effects of Task Sequencing on the Syntactic Complexity, Accuracy,
Background Task Complexity and Task Sequencing: Framing Theoretical Perspectives
Over recent decades, many task complexity studies have been driven by two robust and competing models: Skehan's (1998Skehan's ( , 2001 Limited Attentional Capacity (LAC) Model and Robinson's (2001Robinson's ( , 2003 Cognition Hypothesis which make different predictions about the cognitive operations and attentional resources affecting L2 development. The LAC Model, grounded in psycholinguistic theories of first language (L1) acquisition, conceptualizes the relationship between cognitive and attentional resources during L2 processing (Skehan, 1998). This model predicts that increasing task complexity reduces cognitive capacity for monitoring linguistic form because complexity, accuracy, and fluency (CAF) compete intensely for the same attentional resources. Therefore, learners allocate attentional resources to CAF such that one of these constructs (e.g., accuracy) benefits at the expense of others (e.g., complexity and fluency), resulting in a tradeoff effect due to reduced cognitive capacity for monitoring formal aspects of the task (Abrams, 2019). The LAC model proposes that tasks should balance and distribute learner attention so that no CAF elements are neglected during L2 development. Conversely, Robinson's (2001Robinson's ( , 2003 Cognition Hypothesis (CH), grounded in functional/cognitive linguistics, takes an alternate view of learners' cognitive abilities, arguing that learners possess multiple, rather than limited, attentional resources that do not compete. This forms the basis for the central claim of the CH which states that, since different CAF elements belong to different attentional resources, complexity and accuracy can be attended to concurrently with possible decays in fluency.
To provide a more comprehensive classification for determining task complexity, Robinson (2007) expanded on the CH with the Triadic Componential Framework (TCF), recently renamed the SSARC Model (Robinson, 2010). The TCF classifies task characteristics into three categories: task conditions, task difficulty, and task complexity. Task conditions represent interactional factors influencing the type and quantity of interactions required in a task, while task difficulty describes individual abilities and affective factors learners bring to task performance. Task complexity refers to task features that can be manipulated to challenge learners' cognitive resources and is further divided into resource-dispersing and resource-directing factors (see Table 1). Resource-directing factors direct learner attention to language needed for task completion, while resourcedispersing variables make procedural and performative cognitive demands. Robinson (2001Robinson ( , 2003 stipulates that simple tasks yield greater fluency as resource-dispersing variables become more complex, while further complexification along resource-directing variables yields greater accuracy. Such complexification increases linguistic production and pushes learners to adjust and expand their interlanguage but decreases fluency.
Table 1
Task Variables in Robinson's Triadic Componential Framework (Adapted from Robinson & Gilabert, 2007, p. 164) Several studies have investigated the effects of task complexity on learners' CAF by manipulating resource-directing and resource-dispersing factors. Relevant to the present research are studies examining the effect of raising task complexity along the resourcedirecting variable of number of elements (± elements) and the resource-dispersing variable of pre-task planning time (± planning) on L2 written production.
Expanding on earlier theories of task complexity, Robinson's (2010) SSARC model distinguishes which task variables should be manipulated during task sequencing, favouring cognitive factors over interactional and learner factors. This comprises the first principle in the SSARC model: Tasks should be sequenced only according to cognitive complexity, operationalized as resource-directing and resource-dispersing variables, while other variables remain constant. The second task sequencing principle states that task complexity should first be increased along resource-dispersing variables, followed by resource-directing variables (Robinson, 2010).
The SSARC model illustrates the rationale for these two principles. In Step 1, Stabilize and Simplify, learners complete simple tasks, engaging their current interlanguage. In Step 2, Automatize, complexity is increased along resource-dispersing variables to encourage quicker access to learners' interlanguage. In Step 3, Restructure and Complexify, complexity is raised along both resource-dispersing and resource-directing variables so that learners' interlanguage systems are destabilized and restructured, thus promoting more complex interlanguage (Robinson, 2010). Therefore, the SSARC model offers clear pedagogical implications for task-based syllabus design by proposing a predetermined sequence that promotes interlanguage development, as shifts in task complexity induce gradual shifts in interlanguage throughout the task sequence.
Review of Empirical Studies: Effects of Task Sequencing on L2 Production
To test Robinson's SSARC Model, several studies investigating the effect of task complexity and sequencing on L2 development have been conducted on L2 oral and written production by raising task complexity along resource-directing and/or resourcedispersing variables over different sequencing orders.
Studies testing the SSARC model by manipulating only resource-directing variables have reported a range of findings. Levkina and Gilabert (2014) tested the effect of different sequences (simple-complex, complex-simple, and randomized) complexified along two variables (± spatial reasoning; ± perspective-taking) on learners' retention of spatial expressions over time. Immediate post-tests revealed that complex-to-simple sequencing led to greater short-term retention of spatial expressions, while delayed posttests showed that simple-complex sequencing resulted in greater long-term retention. Baralt (2014) examined the effects of four simple-complex sequences (SSC, SCS, CSC, CCS) on L2 oral and written production by raising complexity along ± reasoning demands. Results showed that sequences with more complex tasks (CCS and CSC) generated increased learning opportunities and greater L2 development. Malicka (2014Malicka ( , 2018 tested the effects of different sequences (simple-to-complex, randomized, or individual tasks) on L2 oral production by raising complexity along ± reasoning demands and ± few elements. Both of her studies reported that simple-to-complex task sequencing led to improved speech rate, accuracy, and structural complexity. While the studies above indicated that simple-to-complex sequencing is effective, the SSARC model was not fully tested since only resource-directing variables were manipulated.
Only two studies have examined the effect of task sequencing by manipulating both resource-directing and resource-dispersing factors according to the SSARC model's proposed order. Lambert and Robinson (2014) explored the effects of simple-complex and randomized task sequencing on L2 written production, modifying both resource-directing (± few elements; ± reasoning demands) and resource-dispersing variables (± planning; ± prior knowledge; ± number of steps; ± multi-tasking). Findings indicated the simple-tocomplex task sequence led to greater overall long-term benefits. More recently, Allaw and McDonough (2019) tested the effect of simple-complex versus complex-simple sequences on L2 written production by manipulating both resource-directing (± spatial reasoning) and resource-dispersing variables (± task structure). While both sequences yielded increased lexical diversity, grammatical accuracy, and fluency, the simple-complex sequence led to greater overall performance and long-term improvement. This provides strong support for Robinson's SSARC model, as the model's proposed simple-complex sequence yielded the greatest performance gains.
The Present Study
The current study was motivated by two gaps identified in the TBLT literature: First, several studies have explored the effects of task sequencing on L2 oral production (Malicka, 2014(Malicka, , 2018, acquisition (Baralt, 2014), and interaction (Kim & Payant, 2014), but the limited number of studies investigating the impacts of task sequencing on L2 writing yielded contradictory results (Allaw & McDonough, 2019;Lambert & Robinson, 2014, Levkina & Gilabert, 2014. Second, while some TBLT studies have explored task sequencing by manipulating either resource-directing or resource-dispersing factors, only two task sequencing studies have simultaneously manipulated both of these task complexity factors as proposed in the SSARC model (Allaw & McDonough, 2019;Lambert & Robinson, 2014), producing different results. Thus, further exploration is warranted to provide additional empirical evidence for the SSARC model. To address the aforementioned gaps, the current study seeks to answer the following question: What is the effect of simple ̶ complex task sequencing on the syntactic complexity, accuracy, lexical complexity, and fluency of L2 written production when compared to individual task performance?
Participants and Context
This study included a sample of 90 undergraduate learners (46 women and 44 men) who enrolled in a fifteen-week, advanced multilingual writing course at a large university in the United States. They were recruited from five different classes taught by two different instructors using a communicative-based syllabus (Ellis, 2003) and employing similar teaching materials. Participants were aged between 19-21 years old (M = 20.6, SD = 8.61) and had lived in English-speaking countries for three years (M = 2.6, SD = .8). The participants came from a variety of L1 backgrounds. Half of the participants were Hispanic (N = 45) while the rest were from China (N = 8), South Korea (N = 7), France (N = 6), , Holland (N = 2), and Nigeria (N = 1). They had learned English as an L2 for 9 to 12 years (M = 10.3, SD = 4.28) at the time of data collection. Participants received nearly 39 hours of formal classroom teaching before data collection and all had upper-intermediate proficiency levels, based on their TOEFL-iBT scores in the 17-23 range and performance on a writing placement test administered by the university annually. After signing informed consent forms, participants were randomly divided into two groups to either perform written tasks in a simple ̶ complex sequence or complete the same tasks individually.
Procedure
Prior to starting the main experiment, a pilot study was conducted to determine the time needed for performing different versions of a written decision-making task and validate the assumptions about task complexity manipulations before the main experiment. Thus, 30 participants first performed a simple decision-making task with 10 minutes of planning time, then the same task without planning time, and finally a complex version of the task without planning time. After performing each task, the participants completed the task complexity questionnaire to validate the task complexity manipulations. The time each participant spent on different versions of the task was recorded by the researcher, for which descriptive statistics are presented in Table 2. As shown in Table 2, there was a gradual pattern of increasing lengths of time spent on the written tasks as cognitive complexity increased, such that the simple task with planning time was the shortest task to perform, followed by the simple task without planning, and the complex task without planning time as the longest. To ensure that pretask planning was properly operationalized, the average amount of time spent by participants to perform the tasks was used as the time limit in the main writing experiment.
After conducting the pilot study, the researcher visited regularly scheduled multilingual writing classes and randomly assigned participants to two groups to conduct the main experiment: 1. the simple ̶ complex sequencing group (n = 30), and 2. the individual task group (n = 60). Following the SSARC model of task sequencing and the times set in the pilot study, the first group performed the written tasks in the simple-to-complex order with 5-minute intervals; participants performed the simple task in 17.2 minutes with 10 minutes of planning time, then the same task in 23.8 minutes without planning time, and finally the complex one in 30.2 minutes without planning time. Similar to Malicka's (2018) study, 60 participants in the individual task group were subdivided into three groups of equal size (n = 20); each subgroup completed only one task under the same task conditions as in the sequencing group. Thus, the difference between the two groups is that in the simple ̶ complex group, all participants performed the tasks successively, while in the individual task group, participants in each subgroup completed only one task at a pre-determined cognitive complexity level.
Writing Tasks
In line with the principles of the SSARC model, different versions of a decisionmaking task varying in terms of inherent cognitive complexity were created and sequenced in the simple ̶ complex order. The versions of the writing task, manipulated along resourcedirecting and resource-dispersing factors, were different in terms of cognitive complexity by decreasing or increasing the number of task elements (a resource-directing factor) and providing or removing pre-task planning time (a resource-dispersing factor). The decisionmaking task included specific descriptions of different job candidates who applied for a software engineering position at a well-known company; the number of candidates varied in different versions of the written task. Table 3 summarizes the sequenced writing task in light of the SSARC model. In Stage I, participants were given four job candidates' application dossiers, and based on the information given, had to decide which two candidates would be the most qualified for the company's software engineering position. Before writing, they had 10 minutes of planning time to inspect each candidate's application dossiers and write some planning notes, allowing participants to prepare for the task performance and focus on content, language, and organization. More importantly, the provision of planning time was hypothesized to mitigate the online processing load during writing and free up attentional resources for focusing on different aspects of the task; furthermore, the task consisted of only four elements placing a lower demand on their working memory. The combination of a simple version of the task and planning time was intended to simplify input corresponding to the first stage in the SSARC model.
In Stage II, the number of task elements remained unchanged and participants again decided which two out of four candidates would be top-tier candidates for the software engineering position. Following the second stage in the SSARC model, in which tasks should be cognitively demanding on resource-dispersing factors (Robinson, 2010), planning time was removed to increase the cognitive complexity on this version of the task, in order to improve automatization of the writing process. Thus, participants had to produce similar ideas as in the simple task, while engaging in planning and writing at the same time. Compared to the simpler task, this task was considered more cognitively demanding since participants wrote without planning time.
In Stage III, participants were given six job candidates' application dossiers and had to decide which two would be the most qualified candidates for the company's open position. The increase in the number of elements along with the removal of planning time bolstered the complexity of the writing task such that participants had to simultaneously process and analyze six candidates, then plan and organize their arguments before finally expressing their new ideas. This increase in cognitive complexity corresponds to stage three in the SSARC model postulating that tasks should be cognitively demanding along both resource-directing and resource-dispersing factors, connecting learners to novel linguistic forms, and pushing them to complexify their interlanguage (Robinson, 2010).
Validation of Cognitive Task Complexity Assumptions
TBLT researchers have called for examining the validity of task complexity assumptions to confirm that tasks designed to be cognitively complex actually result in varying levels of cognitive complexity for learners (Norris, 2010;Révész, 2014). In response, numerous studies have used various subjective and objective techniques to verify the assumptions about the impacts of task manipulations on task complexity, as summarized in Table 4. Révész, Kourtali, & Mazgutova (2017) Before the main experiment, a pilot study with 30 students and 10 teachers was conducted to validate the speculated differences in cognitive complexity levels between the written tasks via two different analytical, subjective techniques: learner self-ratings and expert judgments. To examine learner perceptions and attitudes about the perceived difficulty, stress, and cognitive load of the tasks, a nine-point Likert-scale questionnaire adapted from Lee's (2018) original questionnaire was used to elicit participants' responses to 11 items representing different categories. In addition, all participants were requested to answer four open-ended questions: 1. Did you notice any difference between performance of the writing task with the provision of planning time versus no provision of planning time? 2. Which one was more difficult to perform and why? 3. In your opinion, was there any difference between the two writing tasks in terms of difficulty? 4. Which of the two versions of the written task was more difficult to perform and why? As presented in Table 5, the descriptive results demonstrated a steady increase in participants' self-ratings of perceived difficulty, mental effort, and stress as cognitive complexity intensified. In line with the predictions made, the overall ratings for the simple versions of the task were lower than those for the complex task. In addition, the simple task with planning time obtained lower means than the version without planning time in terms of the three affective variables. Regarding participants' responses to the follow-up questions, one student wrote, The preparation time before writing was really accommodating and beneficial because it helped me gather my thoughts create a clear outline and write with more comfort and confidence. But I had a hard time performing the second task without planning time and it became much harder while doing the third task including more job candidates. Compared to the third task, the second one was less difficult because I already carried it out and had some familiarity with the task performance. To see how experts judge the cognitive load of tasks, 10 university instructors with considerable experience teaching L2 writing courses were asked to rate the difficulty of tasks on a Likert scale originally adapted from Robinson's (2001) questionnaire. They were also provided with open-ended questions regarding the overall perceived difficulty and mental effort required to complete the tasks. Their responses to the questionnaire items were analyzed by one rater and the open-ended responses were examined by two raters. The results are presented in Table 6. As expected, students' and instructors' ratings were consistent: the complex task was rated higher than the simple task, and the simple task with no planning time was rated higher than the simple task with planning time. One expert wrote, In my opinion, the final task was the difficult one because it would require learners to engage in processing and analyzing in more depth. In addition, they would have to choose two out of six candidates with strong applications and support their choices with evidence. However, I would perceive the first task aligned with planning time as the least difficult task because learners can prepare their written plans beforehand and use them while writing. The same version of the task without planning is the medium task which inevitably requires more mental effort and time to process and perform due to the absence of planning time.
CALF Measures
Following Norris and Ortega (2009), who argue for assessing syntactic complexity multidimensionally while warning against using redundant measures which result in a multicollinearity effect, four different measures were utilized to gauge different subconstructs of complexity: 1. the mean length of T-unit (MLT) as a general measure of syntactic complexity calculated by dividing the total number of words by the total number of T-units in a text, 2. the ratio of dependent clauses to T-units (DC/T) as a measure of subordination complexity gauged by the ratio of dependent clauses to T-units in a text, 3. the number of complex nominals per T-unit (CN/T) analyzed by dividing the total number of complex nominals by the total number of clauses, and 4. the number of complex nominals per clause (CN/C) computed by the total number of complex nominals by the total number of T-units (Lu, 2010). The last two measures (CN/T and CN/C) were chosen to examine phrasal complexity in participants' written production. In our analysis, T-unit was used rather than C-unit or AS-unit because the nature of written tasks was monologic (Foster, Tonkyn, & Wigglesworth, 2000). It is defined as "one main clause plus any subordinate clause or non-clausal structure that is attached to or embedded in it" (Hunt, 1970, p. 4).
In line with Ellis and Yuan (2004), the accuracy of L2 written production was examined by calculating the ratio of error-free clauses to the total number of clauses and the ratio of correct verb forms to the total number of verbs used in each text. To calculate error-free clauses, participants' written production was divided into clauses and lexical, morphological, and syntactic errors were identified and marked. Any unmarked clause was considered error-free. For each participant, the proportion of error-free clauses was regarded as the resulting score. Given that the error-free clauses metric is a holistic measure of accuracy, correct verb forms were used as a specific measure. For each participant, the proportion of verbs free of tense, aspect, modality, or agreement errors was used as a score of analysis.
Lexical complexity was gauged with respect to diversity and sophistication. The measure of textual lexical diversity (MLTD) was used by computing "the mean length of sequential word strings in a text that maintain a given TTR value" (Mazgutova & Kormos, 2015, p. 5). MLTD was used rather than other vocabulary diversity metrics, specifically mean segmental type-token ratio (MSTTR), because it is least affected by text length (Mazgutova & Kormos, 2015;McCarthy & Jarvis, 2010) and therefore is considered a more reliable indicator of vocabulary. Additionally, there is a growing demand for using lexical sophistication metrics in measuring L2 written production (Johnson, 2017). In response, the log frequency of content words was utilized to examine the lexical sophistication of texts and was calculated by the ratio of the log frequency to content words in the CELEX database (McNamara et al., 2014). There were two main reasons for the selection of the log frequency of content words: 1. compared to frequency band measures, the log frequency of content words can better represent large and small improvements in the participants' written production due to inclusion of the frequency counts from a large corpus (Kyle & Crossley, 2015); and 2. this metric measures lexical sophistication with a higher degree of reliability in comparison to the raw frequency of content words (Kormos, 2011). Finally, fluency was calculated by counting the number of words produced within a set time (Abrams, 2019). This product-based measure was selected for two reasons: 1. it has ecological validity such that teachers can use it in curriculum-based assessment, and 2. it allows comparability of the results with the findings of past studies.
Statistical Analysis
In the analyses of written performances, we used nine different measures to examine the effects of task sequencing as an independent variable on different constructs of L2 written production as dependent variables. First, means and standard deviations were calculated for all the variables in the different groups. Then, data sets were directly imported into R version 3.6.1 (R Development Core Team, 2019) to check the normality assumption through normal Q-Q plots and the Kolmogorov-Smirnov test. Given that six out of nine variables were associated with a violation of the normality assumption, nonparametric statistics (i.e., Mann-Whitney U test) were used to answer the research question. The Mann-Whitney U test was performed to detect significant comparisons between the simple ̶ complex sequencing group and the individual task group. The level of significance for this study was set at an alpha level of 0.05. For the Mann-Whitney U test, Cohen's d was employed to measure effect sizes. Following Plonsky and Oswald's (2014) benchmarks, d values of .40, .70 and 1.00 were considered as small, medium, and large, respectively.
Effects on Syntactic Complexity
The descriptive statistics for the measures of general complexity (MLT), subordination complexity (DC/T), and phrasal complexity (CN/T and CN/C) are presented for the simple ̶ complex sequencing group and the individual task group in Figure 1. The inferential statistics revealed significant differences in performance between the two groups with respect to MLT in the simple task with planning time, MLT and DC/T in the simple task without planning time, and DC/T in the complex task. The participants in the simple ̶ complex sequencing group produced more complex structures than their counterparts in the individual task group on two simple tasks with and without planning (simple task + planning, p = .000 and simple task ̶ planning, p = .000). In both cases, the d scores indicated large effect sizes corroborating a substantial difference between the two groups' performance in the case of MLT. Similar results were also found between the simple ̶ complex sequencing group and the individual task group in terms of complex subordinations in the simple task without planning time (p = .000) and the complex task (p = .000) with large effect sizes (d = 1.08 and 1.03, respectively). Table 7 displays the performance of the simple ̶ complex sequencing group as opposed to the individual task group for complexity subconstructs.
Effects on Accuracy
The descriptive statistics revealed that the simple ̶ complex sequencing group yielded higher mean scores than the individual task group regarding the two accuracy measures in the complex task, but the opposite was found for the simple task without planning time. In the case of the simple task with planning time, the simple ̶ complex sequencing group produced a higher mean score than the individual task group on correct verb forms; however, the latter had a higher mean score on error-free clauses (see Figure 2). The results from the Mann-Whitney U test revealed that, whereas the individual task group outscored the simple ̶ complex sequencing group regarding error-free clauses and correct verb forms in the simple tasks with and without planning time, the comparisons did not reach statistical significance. The d scores showed small effect sizes between the two groups' performance in the simple task with planning time (d = .2 for EFCs and d = .17 for CVFs) and in the simple task without planning time (d = .21 for EFCs and d = .12 for CVFs). Nonetheless, the proportion of error-free clauses and correct verb forms in the simple ̶ complex sequencing group was higher than in the individual task group in the complex task and the comparisons were statistically significant with large effect sizes (p = .000, d = .95 in the case of EFCs and p = .001, d = .86 in the case of CVFs). The inferential statistics are summarized in Table 8.
Figure 2 Mean Scores for EFCs and CVFs
Note. IND = individual task group, SCS = simple ̶ complex sequencing group
Effects on Lexical Complexity
The descriptive statistics showed the simple ̶ complex sequencing group produced higher mean scores than the individual task group with respect to lexical diversity (MTLD) and sophistication (WRDFRQmc) in the three writing tasks (see Figure 3). The comparisons between the two groups' performance reached statistical significance only in the case of MTLD with a medium effect size in the simple task (p = .004, d = .60). In addition, significant comparisons were found between the simple ̶ complex sequencing group and the individual task group regarding MTLD and WRDFRQmc in the complex task. The d scores indicated a medium effect size for the simple ̶ complex sequencing group compared to the individual task group (d = .66 for MTLD and d = .63 for WRDFRQmc). Table 9 presents the inferential statistics for two measures of lexical complexity.
Figure 3 Mean Scores for MTLD and WRDFRQmc
Note. IND = individual task group, SCS = simple ̶ complex sequencing group
Effects on Fluency
The descriptive statistics showed that participants in the simple ̶ complex sequencing produced more words within the designated time limit in the simple task with planning time and in the complex task when compared to the individual task group. Nevertheless, the latter generated more words in the simple task without planning time than the former (see Figure 4). As can be observed in Table 10, the results revealed that the comparison between the two groups was statistically significant only in the case of the simple task with planning time (p = .001) and this was reflected by a large effect size (d = .87). However, no comparisons between the two groups were found to be statistically significant with respect to fluency in the simple task without planning time (p = .350) and in the complex task (p = .320). Overall, these results revealed that the simple ̶ complex sequencing group produced greater syntactic complexity at the general level (simple tasks with and without planning time) and at the subordination level (simple task without planning and complex task); wrote more error-free clauses and correct verb forms (complex task); exhibited more diverse and sophisticated vocabulary (simple task with planning and complex task); and finally, wrote faster and generated more words within a set time (simple task with planning time) as compared to the individual task group.
Discussion
The major thrust of this study was to explore the effects of task sequencing on multilingual learners' written production in accordance with the SSARC model. Given that past studies tested the role of the SSARC model in L2 oral production (Baralt, 2014;Malicka, 2014Malicka, , 2018, finding that the simple-to-complex order led to significant gains in L2 production over the short or long term, the current study examined the effectiveness of the SSARC model in L2 written production using a simple-complex task design. The results demonstrated that simple-complex task sequencing favoured syntactic and lexical complexity, promoted accuracy, and assisted fluency, providing empirical evidence supporting the theoretical claim of the SSARC model regarding the beneficial role of simple-complex task sequencing for L2 written production. With respect to syntactic complexity, whereas production of complex structures gradually decreased in both the simple-complex sequencing and individual performance groups over the sequence, the former produced both a significantly higher mean length of T-unit in the sequence's first two tasks, characterizing general complexity, and significantly more subordination, recognized as the most indicative source of complexification at the intermediate level, in the last two tasks in the sequence. Compared to the individual task group, the simple-complex sequencing group also produced more complex nominal structures throughout the sequence, manifesting greater phrasal-level complexification as a result of increased cognitive complexity, although not at a statistically significant level. These results partially corroborate Malicka (2014Malicka ( , 2018 which also found that simple-complex sequencing increased syntactic complexity at the clausal level, but conversely reported decreases in subordination. Furthermore, the results revealed that in the simple task with planning time, only general syntactic complexity was significantly fostered in the simple-complex sequencing group. As reported previously (Abdi Tabari, 2020;Ellis & Yuan, 2004;Farahani & Meraji, 2011;Rostamian et al., 2018), pre-task planning provided learners with the opportunity to plan, linguistically encode their messages, and produce more complex writing. Increases in general complexity and complex subordinations can also be explained by the SSARC model, postulating that removing pre-task planning time and increasing cognitive complexity induced learners to exhibit higher levels of syntactic complexity because their exposure to the simple task created more scaffolded opportunities for rehearsal, serving as a preparatory mechanism to complexify their writing at a deeper level (Malicka, 2014). Nevertheless, increases in the number of elements did not prompt learners to stretch their syntactic resources in the complex task to produce significantly more complex structures at the phrasal complexity level, possibly due to their proficiency level. Overall, these results partially support the Cognition Hypothesis (Robinson, 2003) which predicts that increasing task complexity along resource-directing factors can push learners to extend their existing L2 repertoire to meet task demands.
Regarding L2 writing accuracy, the results revealed that on both measures for this construct, correctness of linguistic forms steadily decreased in both groups' performance, with the simple task with planning generating the most accurate forms. Nevertheless, significant differences were observed only in the complex task resulting in increased accuracy in the simple-complex group. These results suggest that when tasks are performed in the simple-to-complex order, learners can produce more correct clauses and verbs and improve the accuracy of their written production as compared to tasks performed in isolation. These results echo findings of past oral-and written-performance task sequencing studies (Allaw & McDonough, 2019;Malicka, 2014Malicka, , 2018 reporting increases in grammatical accuracy under the simple ̶ complex task condition. Following the SSARC model, the simple task with planning time may provide learners with the opportunity to direct more attention to linguistic forms, activate their monitoring behaviour, and rehearse target-like structures. In the simple task without planning time and the complex task, they can recall those structures, be cognizant of problematic linguistic areas, and avoid errors although pre-task planning time is removed. Consequently, less deviation from target-like structures should occur on the less complex and complex versions of the task. These results support Robinson's Cognition Hypothesis which postulates that complex tasks can lead to more accurate language if complexification occurs along resource-directing factors. However, it is imperative to mention that the results of this study only provide some preliminary evidence confirming the prediction of dual increases in complexity and accuracy. Concerning lexical complexity, results showed that lexically diverse items gradually increased in both groups' performance and the complex task triggered the highest lexical diversity. Regarding lexical sophistication, a different pattern was found for the two groups such that the simple ̶ complex sequencing group produced the most sophisticated vocabulary in the complex task, but the individual task group generated the most sophisticated lexical items in the simple task with planning. These results support previous findings that increased task complexity induces greater lexical diversity in both oral and written performance (Abdi Tabari, 2020;Frear & Bitchener, 2015;Kuiken & Vedder, 2007, 2008Levkina & Gilabert, 2012;Ong & Zhang, 2010;Rahimi, 2018), as well as Allaw and McDonough's (2019) findings, providing further confirmation of the SSARC model's claim that increasing cognitive complexity in the simple ̶ complex sequence induces greater lexical complexity. The simple task helped learners focus on simplified input and stabilize recently learned lexical items, while also creating prerequisite conditions for the subsequent complexification of their lexical production. The less complex task, removing pre-task planning, encouraged greater independence in using a wider range of lexical items, as well as fostering consolidation and automatization. Finally, the complex task, increasing the number of elements, aided the learners in complexifying their lexical production over the simple-to-complex sequencing order.
Regarding L2 writing fluency, both groups generated the highest number of words at the fastest speed in the simple task with planning time. Notably, under the individual task performance condition, the proportion of words produced and writing fluency decreased steadily as a result of increasing cognitive complexity, while task performance under the simple ̶ complex condition displayed a U-shaped pattern in writing fluency, demonstrating the multidimensional, rather than strictly linear, nature of written production even over a short-term sequence (Malicka, 2014). Furthermore, the participants produced more words and demonstrated greater fluency in the simple-to-complex order than in isolation. Increased fluency in the simple task with planning corroborate the findings of previous studies (Farahani & Meraji, 2011;Levkina & Gilabert, 2012;Rahimi, 2018;Rostamian et al., 2018) and support Robinson's (2005) claim that the resource-directing factor's role (e.g., planning time) is to facilitate production under time pressure, assist access to learners' existing L2 knowledge, promote automaticity of the interlanguage system, and build fluency. Moreover, these results confirm the prediction of the Cognition Hypothesis that simpler tasks will improve production fluency (Baralt, Gilabert, & Robinson, 2014). Additionally, it is necessary to compare our results with the findings of Allaw and McDonough's (2019) study to better understand the short-and long-term effects of task sequencing on L2 writing fluency. While our results reveal a non-linear pattern regarding writing fluency over a short-term sequence, Allaw and McDonough (2019) reported that writing fluency is promoted as the result of increasing cognitive task complexity in the long term. Although the two studies manipulated tasks along resourcedirecting and resource-dispersing factors and employed the same measure to gauge L2 writing fluency, they differ from each other in terms of (1) the type of written task, (2) the types of resource-directing and resource-dispersing factors manipulated, (3) the explicit instructions participants received, and (4) the time intervals between the simple, less complex, and complex tasks. These differences can possibly account for diverging results between the two studies.
Implications and Future Directions
The current study offers theoretical and pedagogical implications for TBLT researchers and L2 teachers. Theoretically, our findings partially support the predictions of the Cognition Hypothesis and the SSARC model, revealing the progressive and variable nature of the effects of task sequencing when manipulated along resource-directing and resource-dispersing factors on a short-term simple-to-complex sequence. Increases in task complexity along the resource-directing factor induced learners to surpass their L2 knowledge base to meet the demands of the task, thus increasing the complexity of L2 written production. Conversely, the resource-dispersing factors facilitated access to existing L2 knowledge, promoting automatization and fluency. To help learners achieve more balanced CALF performance in writing, increasing these factors alone would not necessarily extend learners' L2 knowledge base and increase task complexity unless these factors are specifically integrated into simpler tasks in the sequence. It should be noted only a shorter sequence was tested and more research is warranted to examine how specific combinations of resource-directing and resource-dispersing factors could be integrated into a longer sequence to be used for a classroom syllabus. In line with the SSARC model, our findings also suggest that simple tasks designed for stabilization and complex tasks aimed at automatization should be implemented before subsequent restructuring and complexification can occur. This simple-to-complex sequence can provide learners with more scaffolded opportunities to practice and consolidate newly learned lexical and grammatical structures in their writing, promoting complexification of their production. However, this claim still requires more fine-grained evidence to flesh out the proposals of the SSARC model in the long term and disclose what happens after the processes of restructuring and complexification have occurred.
Pedagogically, the SSARC model associated with the Cognition Hypothesis offers several useful implications for classroom practice and syllabus design. Following the SSARC model, teachers can design lesson plans including a set of tasks with varying levels of cognitive complexity and provide multiple practice opportunities for learners to perform written tasks while benefiting from extra preparatory options such as pre-task planning time. At the pre-task stage, L2 learners with varying writing proficiency levels can experience less pressure and gain mental and linguistic preparedness for subsequent tasks due to scaffolding opportunities, improving focus on the content of their writing and increasing their speed of production. In the following stage, teachers can prompt learners to repeat the same task to facilitate greater autonomy, gradually increasing their independent performance as extra preparation is removed. Thus, learners may be more involved in rehearsal, the extension of their L2 repertoire, and development of complex structures, gaining prerequisite readiness for performing more sophisticated written tasks. In the posttask stage of writing, teachers can design more sophisticated resource-directing tasks, which stretch learners' linguistic resources, encourage them to face the challenges of performing these complex tasks, increase their ability to take risks, and push their interlanguage to its limits to use more innovative language. Teachers can also adjust the complexity level of tasks by manipulating different resource-directing and resourcedispersing factors; however, such task sequencing decisions should be made based on learners' needs, considering their background knowledge, proficiency levels, motivation, and readiness. In addition, teachers are cautioned against designing a task sequence with different cognitive complexity levels in a vacuum. They should ensure that tasks with different instructional demands match relevant areas of their syllabus and promote learners' existing L2 knowledge over time. Finally, teachers with limited teaching experience may struggle to understand and interpret research-supported ideas, manipulate and sequence tasks, and regularly and effectively implement them in classroom settings. To solve this problem, teachers should receive ongoing training, support, and hands-on practice to understand the principles of the Triadic Componential Framework and the SSARC model, make sound task-sequencing decisions, and effectively operationalize classroom tasks with some degree of confidence.
This study has several limitations which require more consideration for future studies. As task sequencing was operationalized along both resource-directing and resource-dispersing factors, increasing the number of elements or the length of pre-task planning time could lead to different task sequencing effects on the CALF of L2 written production. Additionally, this study examined the effects of two task complexity factors (± number of task elements and ± planning time) on L2 writing. Future studies can extend this line of research by investigating the effects of manipulating other resource-directing and resource-dispersing factors on L2 written production. Furthermore, only subjective techniques were employed to verify the assumptions regarding the impacts of task manipulations on task complexity. Future studies could use objective techniques such as eye-tracking or dual task methodology to validate task difficulty assumptions in the L2 writing context. Finally, this study recruited participants at the upper-intermediate proficiency level, but more significant results could be obtained if advanced-level participants are examined to discover any relationships between proficiency and task sequencing effects. | 9,340.8 | 2021-01-01T00:00:00.000 | [
"Linguistics",
"Education"
] |
Expression of vascular endothelial growth factor (VEGF) in equine sarcoid
Background Sarcoids are the mostcommon skin tumors in horses, characterized by rare regression, invasiveness and high recurrence following surgical intervention and Delta Papillomaviruses are widely recognized as the causative agents of the disease. In order to gain new insights into equine sarcoid development, we have evaluated, in 25 equine sarcoids, by immunohistochemistry and western blotting analysis, the expression levels of VEGF, Ki67 and bcl-2. Moreover, we have measured microvessel density and specific vessel parameters. Results All sarcoid samples showed a strong and finely granular cytoplasmatic staining for VEGF in the majority (90%) of keratinocytes, sarcoid fibroblasts and endothelial cells. Numerous small blood vessels, immunostained with Von Willebrand factor, often appeared irregular in shape and without a distinct lumen, with mean values of microvessel area and perimeter lower than normal. Moreover, in all sarcoid samples, Ki67 immunoreactivity was moderately positive in 5–10% of dermal sarcoid fibroblasts, while Bcl2 immunoreactivity was detected in 52% of the sarcoid samples, with a weak staining in 20–50% of dermal sarcoid fibroblasts. Biochemical analysis was consistent with immunohistochemical results. Conclusions This study has provided evidence that in equine sarcoid: VEGF was strongly expressed; the increased number of vessels was not associated with their complete maturation, probably leading to a hypoxic condition, which could increase VEGF synthesis; the levels of sarcoid fibroblasts proliferation were very low. Concluding, VEGF may have a role in equine sarcoid development, not only through the increase of angiogenesis, but also through the control of sarcoid fibroblast activity.
Background
Sarcoids are the most common skin tumors in horses [1][2][3], with prevalence rates ranging from 12.9 to 67% of all equine tumors [4,5]. They appear as benign fibroblastic skin tumors, and are characterized by rare regression, invasiveness and high recurrence following surgery [6][7][8]. Delta Papillomaviruses (Bovine Papillomavirus 1, Bovine Papillomavirus 2 and Bovine Papillomavirus 13) [9][10][11] are involved in the pathogenesis of this tumor, mainly through the biological activity of the E5 oncogene. It has also been reported that BPV alters DNA methylation status and oxidative stress parameters [12,13].
Moreover, equine sarcoid may be considered one of the main long-term complications in the wound healing process of the horse [5,14], consequent to abnormal fibroblast proliferation and changes in dynamics of the extracellular matrix (ECM) and its main components [15]. Altered turnover of the ECM deposition and degradation, as result of an altered expression of matrix metalloproteinases (MMPs) and tissue inhibitor of metalloproteinases (TIMPs), were reported as basic mechanism for these changes [15]. ECM remodeling is strictly correlated to angiogenesis [16], as MMPs production is stimulated by many of the same factors (VEGF; tumor necrosis factor alpha; basic fibroblast growth factor) implicated in endothelial migration and formation of new capillaries (angiogenesis) [17]. VEGF, also known as vascular permeability factor (VPF), is a member of the platelet-derived growth factor (PDGF) family of growth factors, having a potent angiogenic activity and an important role in the modulation of ECM homeostasis and remodeling [18,19]. It is involved in numerous physiological and pathological processes, such as embryonic development [20], bone formation [21] wound healing [22] and cancer [23][24][25][26][27], in which it is up-regulated by oncogene expression, growth factors and hypoxia [28][29][30]. Moreover, it has been reported that Human Papillomavirus type 16 E6 and E7 oncoproteins may contribute to tumor angiogenesis by direct activation of the VEGF gene promoter in human lung and cervical carcinoma [31][32][33].
Although the general role of VEGF in angiogenic processes has been intensively studied [23][24][25][26][27], its specific function in sarcoid development has not been investigated, so far. In this regard, the aim of this study was to evaluate in a subset of equine sarcoids, the levels of VEGF expression in association with angiogenesis (evaluation of microvessel density and specific vessel parameters), and with Ki67 and bcl-2 expression.
Histological features
Haematoxylin and eosin staining of examined sarcoids (n = 25) showed the typical histological changes in their epidermal (when present) and dermal component, as reported by [15], such as: hyperkeratosis and epidermal hyperplasia often accompanied by rete pegs; dermal fibroblasts usually oriented perpendicular to the basal epidermal layer in a 'picket fence' pattern; abundant ECM; numerous small vessels irregular in shape.
Immunohistochemical and biochemical results
The expression patterns of VEGF, Ki67 and bcl-2 in 25 equine sarcoids and 5 normal skin samples are summarized in Table 1.
In all normal skin samples, weak granular cytoplasmatic VEGF immunopositivity was detected predominantly in the basal layer of normal epidermis (Fig. 1a), while normal fibroblasts were negative. Ki67 immunostaining was moderate and restricted to the basal layer of epidermis and hair follicle (Fig. 1b), while bcl-2 immunostaing was very weak and in 2/5 samples was negative (Fig. 1c). Moreover, in normal skin samples, blood vessels, immunostained with vWF, appeared regular in shape and size, with a distinct lumen (Fig. 1d).
Western blotting analysis was performed to check the specificity of the antibodies used throughout the study ( Fig. 3a-h). A band of the expected molecular size for VEGF (25 Kda), vWF (250 Kda), bcl-2 (26 Kda) and Ki67 (110 Kda) was identified in the tested samples as well as in Hela or Saos-2 cell lines used as positive control (Fig. 3a, c, e, f). The densitometric analysis of the bands normalized for β-actin revealed variable levels of VEGF protein among the samples, with higher expression in two sarcoid samples (Fig. 3b). Moreover, Von Willebrand factor protein was over-expressed in each sarcoid sample compared to each of normal skin lysates, in which the specific band was almost undetectable (Fig. 3d). The biochemical expression of bcl-2 and Ki67 was variable among the samples and no different trends between normal skin and tumor samples could be pointed out as confirmed by densitometric analysis and normalization for β-actin levels (Fig. 3f, h).
Discussion
VEGF is a potent angiogenic factor, produced by a variety of cell types, including keratinocytes, endothelial cells, macrophages, mast cells, fibroblasts [29,34,35] and is involved in several types of tumors [23][24][25][26][27], where it has been shown to influence both tumor neovascularization and dissemination [36,37]. Vascular endothelial growth factor can occur in at least six different isoforms, containing 121, 145, 165, 183, 189 and 206 amino acids. VEGF 121 is a freely diffusible isoform [38] and it has been proved to have a stronger angiogenic and tumorigenic activity when compared to the bigger isoforms [39][40][41].
In the present study, for the first time, we have demonstrated, by immunohistochemistry, the overexpression of VEGF (isoform 121) in equine sarcoid compared to normal skin, which was in part confirmed by biochemical analysis. It appeared as a strong and finely granular cytoplasmic staining pattern in the majority (> 90%) of keratinocytes, endothelial cells and sarcoid fibroblasts, suggesting a possible role in equine sarcoid development. Interestingly, in sarcoid samples we have also observed the presence of numerous small vessels, immunostained with vWF, which appeared often irregular in shape These findings were in agreement with other studies, which reported that tumor vasculature, formed under the influence of VEGF, is often structurally and functionally abnormal [42], probably as the results of the insufficient production or activation of other angiogenic factors, necessary for the formation of mature Fig. 1 Equine normal skin. Streptavidin-biotin-peroxidase stain. a weak granular cytoplasmatic VEGF immunostaining was detected in the basal layer of epidermis and hair follicle while normal fibroblasts were negative 20×; b Ki67 immunostaining was moderate and restricted to the basal layer of epidermis 40×; c bcl-2 immunostaining was very weak and restricted to the basal layer of epidermis 40×; d blood vessels, immunolabeled with vWF, appeared regular in shape, with a distinct lumen 20× and functioning new vessels [25,26]. These data seem to strongly support the hypothesis that in sarcoid, a suboptimal blood flow could lead to a deficient oxygen gradient within the tissue that exacerbates angiogenesis [43]. It is well known that tissue mild hypoxia, an almost universal condition within wound healing, from which sarcoid could origin, is an effective inducer of VEGF synthesis [44]. Therefore, in sarcoid, a vicious circle may occur in which hypoxia upregulates VEGF synthesis, leading to an insufficient vascularization, which in turn probably exacerbates hypoxia. Despite numerous studies characterizing its angiogenic property, data on the role of VEGF in ECM homeostasis and remodeling are still scanty.
For this reason, we speculate that, in equine sarcoid, VEGF could be implicated not only in angiogenesis but also in ECM homeostasis and remodeling, through a deregulation of fibroblasts proliferation and apoptosis in a possible hypoxic condition. This mechanism was also reported during endochondral bone formation, in which VEGF couples hypertrophic cartilage remodeling, ossification and angiogenesis [21]. In this regard, a very low fibroblast growth rate has been reported in equine sarcoid, along with a low frequency of p53 mutations [45]. These last were generally associated with abnormalities in apoptotic pathways rather than abnormal cell cycle control mechanism [46].
In our sarcoids samples, the percentage of Ki67 positive fibroblasts ranged from 5 to 10% and, in agreement with previous studies [46][47][48], these results confirm that the rate of fibroblast proliferation in equine sarcoid was very low. Moreover, these observations correspond to the clinical evidence that equine sarcoids are slow growing tumors and can remain quiescent for years [47]. IHC and biochemical data did not prove an overall differential expression of Ki67 and bcl-2 in sarcoids compared to normal skin; however, the more frequent localization of BPV-positive cells, right near the dermo-epidermal junction [49,50], could explain the higher expression of Ki67 and bcl-2 observed in epithelial portion and in dermo-epidermal junction.
Conclusion
Concluding, in this study we hypothesized that VEGF might have a role in sarcoid development, by altering ECM homeostasis, through a selection of a quiescent population of fibroblast, leading to an impaired degradation and excessive accumulation of ECM [14]. This also seems to be also supported by our previous study, which documented that the excessive and progressive deposition of connective tissue (collagen) in sarcoid is not only the result of elevated synthesis by fibroblasts, but it is also caused by a deficiency in matrix degradation due to an alterated expression of MMPs and TIMPs [15].
Currently, despite the numerous sarcoid treatment choices available, not all sarcoids are responsive. Hence, the importance of a better knowledge of equine sarcoid development. Further studies in the near future are needed to investigate the association between equine sarcoid and tissue . 40×; f Secondary-only negative control for bcl-2. 40×; g bcl-2 immunoreactivity was weak in few dermal fibroblasts (arrow). 40×; h Secondary-only negative control for bcl-2 40×; i Numerous small blood vessels, immunolabeled with vWF, often appered irregular in shape and without a distinct lumen (arrow). 20×; l Secondary-only negative control for bcl-2. 20×
Tumor samples
A total of 25 (S1-S25) equine sarcoids (each from a different horse), clinically identified based on their gross morphology according to Pascoe and Knottenbelt [51], were obtained from affected horses, which underwent surgery routinely, adhering to a high standard (best practice) of veterinary care, after in- Table 1). Sarcoid tissues used in this study were known to be BPV positive [48].
As controls, tissues from 5 normal skin BPV positive samples (N1-N5), were taken during necropsy from five healthy horses. We did not seek committee approval as all samples analyzed were not collected as part of experimental clinical veterinary practices but as routine diagnosis and treatment, according to Directive 2010/63/EU (art. 1 c. 4) on protection of animals used for scientific purposes. All samples were 10% formalin fixed, paraffin-embedded for routine histological processing and stained with haematoxylin and eosin for light microscopy study.
Additional five sarcoids (S26-S30) and 2 normal skin samples (N6-N7) were collected as above and immediately frozen at − 80°C in order to allow Western blotting analysis.
Immunohistochemistry
Paraffin sections of equine sarcoids (S1-S25) and normal skin (N1-N5) from healthy horses were dewaxed in xylene, dehydrated in graded alcohols and washed in 0.01 M phosphate-buffered saline (PBS), pH 7.2-7.4. Endogenous peroxidase was blocked with hydrogen peroxide 0.3% in absolute methanol for 30 min. The immunohistochemical procedure (streptavidin biotin-peroxidase method) for the detection of VEGF, bcl-2, Ki67 and vWF was the same as that used by the authors in a previous study [15]. The immunolabelling procedure included negative control sections incubated with PBS instead of the primary antibodies (Fig. 2b, d, f, h, l). Primary antibodies, dilutions and antigen retrieval techniques used in this study are listed in Table 3. Primary antibodies were diluted in phosphate-buffered saline (PBS) and applied overnight at 4°C. After 2 washes in PBS, MACH 1 mouse probe (Biocare Medical, LLC, Concord, CA, USA) was applied for 20 min at room temperature. After, MACH-1 Universal HPR-Polymer (Biocare Medical) was applied for 30 min at room temperature. To reveal immunolabelling, diaminobenzidinetetrahydrochloride was used as a chromogen, and haematoxylin as counterstain.
Scoring of immunoreactivity
The intensity of immunolabelling in each specimen, for each antibody, was scored by two independent observers (MM and KP) under blinded conditions, as performed in a previous study [52]. Briefly, for each sample the immunoreactivity was scored from negative to strong, as follows: n.a., not assessable; − negative staining; +/− weak immunolabelling; + moderate immunolabelling; ++ extensive and strong immunolabelling. Moreover, the number of positively-labelled cells was established by counting 1.000 cells in 10 fields (See figure on previous page.) Fig. 3 Western blotting analysis of VEGF, vWF, bcl-2 and Ki67 protein expression in equine sarcoids (S) and normal skin samples (N). a VEGF was expressed in all the analysed samples, with higher protein levels in S29 and S30. Hela whole cell lysate was run along with equine samples as positive control. The membrane was re-probed for β-actin to allow normalization. b Densitometric values were measured and expressed as VEGF/ actin ratio. c vWF was detectable in sarcoids but not in normal skin samples. Saos-2 whole cell lysate was run along with equine samples as positive control. The Saos-2 box is cut from the same membrane at a different exposure time and properly aligned according to the molecular standard loaded onto the gel. The membrane was re-probed for β-actin to allow normalization. d Densitometric values were measured and expressed as vWF/actin ratio. e bcl-2 was expressed at variable levels in the analysed samples. Hela whole cell lysate was run along with equine samples as positive control. The membrane was re-probed for β-actin to allow normalization. f Densitometric values were measured and expressed as bcl-2/actin ratio. g Variable expression of Ki67 protein in the analysed samples. Hela whole cell lysate was run along with equine samples as positive control. The Hela box is cut from the same membrane at a different exposure time and properly aligned according to the molecular standard loaded onto the gel. The membrane was re-probed for β-actin to allow normalization. h Densitometric values were measured and expressed as Ki67/actin ratio at 400× magnification (40× objective 10× ocular) and results were expressed as percentage.
Microvessel density and vascular parameter measurements
Microvessel density (number of vessels per mm 2 ) and vascular parameter measurements (vessel area and perimeter) were performed on vWF immunostained sections, using a free image analysis software (Image J). For each tumor 10 fields, randomly selected, were captured at 400× magnification (40× objective 10× ocular) with a microscope (E-400; Nikon Eclipse, Tokyo, Japan) coupled to a videocamera (TKC1380E; JVC, Tokyo, Japan), stored in the digital memory, and shown on the monitor. The fields were examined and manual outlining of immunolabelled microvessels was performed; microvessel density, areas, perimeters were then calculated based on image analysis, and results were expressed as mean (X) and standard deviation values (±SD) ( Table 2). Statistical assessment was performed using a Student t-test by analysis of variance (ANOVA) and p-value< 0.05 with 95% confidence interval (CI) was considered statistically significant. PDGF: platelet-derived growth factor; S: samples T: sarcoid samples; TIMP: Tissue inhibitor of metalloproteinase; VEGF: Vascular endothelial growth factor; vWF: Von Willebrandt factor PM has conceived the study, coordinated the group and drafted the manuscript; MM together with PM, has participated in the conception and design of the study, in analysis and interpretation of histological and immunohistochemical data and drafted the manuscript; BR and GB have been involved in revising the manuscript for important intellectual content; IP and KP carried out the histological and immunohistochemical studies, taking part in western blotting analysis; GA and IP carried out the molecular studies, taking part in western blotting analysis. All authors read and approved the final manuscript.
Protein extraction and SDS PAGE/western blotting
Author's information MM is employed as a researcher at the Department of Veterinary medicine and animal production; BR and GB are employed as associate professors at the Department of Veterinary medicine and animal production; PM is employed as a full professor at the Department of Veterinary medicine and animal production; IP is a student trainee for the degree thesis; GA is a postdoc research fellow at Department of Veterinary medicine and animal production; KP is PhD fellow at Department of Veterinary medicine and animal production. | 3,939.6 | 2018-09-03T00:00:00.000 | [
"Medicine",
"Biology"
] |
Weighted residual network for SAR automatic target recognition with data augmentation
Introduction Decades of research have been dedicated to overcoming the obstacles inherent in synthetic aperture radar (SAR) automatic target recognition (ATR). The rise of deep learning technologies has brought a wave of new possibilities, demonstrating significant progress in the field. However, challenges like the susceptibility of SAR images to noise, the requirement for large-scale training datasets, and the often protracted duration of model training still persist. Methods This paper introduces a novel data augmentation strategy to address these issues. Our method involves the intentional addition and subsequent removal of speckle noise to artificially enlarge the scope of training data through noise perturbation. Furthermore, we propose a modified network architecture named weighted ResNet, which incorporates residual strain controls for enhanced performance. This network is designed to be computationally efficient and to minimize the amount of training data required. Results Through rigorous experimental analysis, our research confirms that the proposed data augmentation method, when used in conjunction with the weighted ResNet model, significantly reduces the time needed for training. It also improves the SAR ATR capabilities. Discussion Compared to existing models and methods tested, the combination of our data augmentation scheme and the weighted ResNet framework achieves higher computational efficiency and better recognition accuracy in SAR ATR applications. This suggests that our approach could be a valuable advancement in the field of SAR image analysis.
Introduction
Due to its ability to operate independently of atmospheric and sunlight conditions, synthetic aperture radar (SAR) offers advantages over optical remote sensing systems.Automatic target recognition (ATR) is a crucial application of SAR systems, traditional techniques relied on handcrafted features such as the shape, size, and intensity of objects in the images (Oliver and Quegan, 2004).However, these techniques faced limitations as they required manual feature extraction and were susceptible to variations in conditions, object orientations, and configurations Wu et al. (2023a) and Yuan et al. (2023).In recent years, numerous approaches have emerged with the advancement of learning algorithms such as generative neural networks, multilayer autoencoders (Wu et al., 2022), long shortterm memory (LSTM), and highway unit networks (Deng et al., 2017;Lin et al., 2017;Song and Xu, 2017;Zhang et al., 2017).However, it is important to note that even state-of-the-art machine learning algorithms may encounter challenges when applied to SAR ATR, such as the limited availability of training samples and the issue of model overfitting.
To address these challenges, Chen et al. (2016) have introduced all-convolutional networks (A-ConvNets) as a solution, reducing the number of free parameters in deep convolutional networks and thus mitigating the overfitting problem caused by limited training images.Furthermore, several SAR image data augmentation methods have been proposed in recent years, such as the works by Zha (1999), Ding et al. (2016), Wagner (2016), Xu et al. (2017), and Pei et al. (2018a), aiming to tackle the issue of limited training data.
In order to enhance the training data for SAR target recognition, several methods have been proposed.Zha (1999) suggested generating artificial negative examples by permutating known real SAR images to increase the dataset size.Wagner (2016) utilized positive examples to improve robustness against imaging errors.Pei et al. (2018a) developed a multi-view deep learning framework that generates a large amount of multiview SAR data for training.This approach expands the training dataset by incorporating the spatial relationships between target images, resulting in improved recognition accuracy.Additionally, techniques such as suppressing speckle noise through fusion filters (Xu et al., 2017) and adding simulated speckle noise with varying parameters to training samples (Ding et al., 2016) were employed to enhance the SAR image data.
Among deep learning networks, Convolutional Neural Networks (CNNs) appear to be the most popular choice for SAR target recognition (Chen et al., 2016).However, severe model overfitting related to deep CNNs in SAR ATR was observed, leading them to propose an alternative solution called all-convolutional networks (A-ConvNets) to reduce the number of free parameters.A-ConvNets consist of sparsely connected layers instead of fully connected layers, providing a means of adjusting the model training process by improving network architecture.
There have been additional studies combining CNNs with assistant approaches, particularly in the context of data augmentation (Zhang et al., 2022;Wu et al., 2023b).The data augmentation methods used in SAR ATR can be broadly categorized into spatial information-related methods (Wagner, 2016;Pei et al., 2018a) and speckle noise-related methods (Xu et al., 2017).For spatial information-related approaches, Pei et al. (2018a) proposed a multiview deep learning framework that generates a large amount of multiview SAR data.This includes combinations of neighboring images with different azimuth angles but the same depression angle.By expanding the training dataset through this multiview SAR generation system, the spatial relations among target images are taken into account, resulting in higher model accuracy.Another typical method involves generating artificial images through distortion and affine transformation (Wagner, 2016).
Regarding the approach related to speckle noise, Xu et al. (2017) proposed a data augmentation technique utilizing a fusion filter-based noise suppression approach.This approach aims to address the low recognition rate and low robustness of traditional classification methods toward speckle noise.Other works have also focused on incorporating speckle noise characteristics in data augmentation techniques (Chierchia et al., 2017) and CNN models (Ma et al., 2019).Also, researchers are seeking to modify traditional CNN structures to better cater to SAR ATR requirements.These efforts include altering the learning parameters (Pei et al., 2018b), optimizing the network structure, and integrating speckle noise-related factors during model training (Kwak et al., 2019).In their work, the speckle noise was first suppressed using the fusion filter, and then the noise-suppressed images were used for network training to enhance model accuracy.
In SAR ATR tasks, CNNs have been extensively applied due to their effectiveness.Neural network structures, such as convolutional highway units, have been employed to train deeper networks with limited SAR data (Lin et al., 2017).However, it is important to consider the special characteristics of SAR images and adjust them accordingly to network models.
Although existing SAR ATR works have primarily utilized machine learning frameworks, particularly neural networks, and made significant efforts in adapting SAR images to network models, SAR images require special attention due to their uniqueness as remote sensing data.For instance, the application of deep convolutional highway units demonstrated promising results in training deeper networks with limited SAR data, the introduction of extra parameters, and the potential invalidation of layers due to shortcut connections need to be considered (Lin et al., 2017).
Literature has shown that data augmentation, particularly noise-related methods, can improve model accuracy (Ding et al., 2016).Some works have been done to simulate and incorporate speckle noise with different parameters into the training samples (Ding et al., 2016).However, evaluating handcrafted images against ground-truth data and predicting real-world recognition processes presents challenges.It is also important to consider image samples with noise cancellation in addition to noise addition, as both can contribute to the network training process.
Furthermore, to address the limitations of the CNN structure, other improvements can be considered in terms of the training process.CNNs are known for their strong feature extraction capability, resulting in success in image processing-related areas.However, when applying CNNs to SAR ATR, it is crucial to address the limited quantity of ground truth images, which are more difficult to acquire compared to optical RGB format images (Hochreiter and Schmidhuber, 1997;He et al., 2016).Overfitting can become a problem when training CNN models on SAR data.
Motivated by these considerations, this paper proposes a modified version of the Residual Network (ResNet) for SAR ATR, incorporating data augmentation to enhance recognition accuracy.Specifically, a residual strain control is introduced to modify the ResNet structure proposed by He et al. (2016), which has demonstrated superior training depth and accuracy compared to other CNNs.The proposed modification reduces training time and enlarges the SAR image dataset by both canceling and adding speckle noise, leading to improved recognition accuracy.Experimental results show that the proposed weighted ResNet, combined with data augmentation, enhances computational efficiency and recognition accuracy.
The main contributions of this paper can be summarized as: 1) This paper proposes a data augmentation method related to speckle noise in SAR images, which enhances the size and quality of the SAR image dataset.This augmentation, which involves both the addition and removal of noise, resulted in a more robust and accurate CNN model for SAR ATR.
2) A weighted ResNet is proposed which incorporates a unique residual strain control factor in its framework.By adjusting the residual strain of each weight layer, the weighted ResNet managed to enhance the model's computational efficiency, accuracy, and convergence speed, offering a major step in model optimization.
3) This paper presents comprehensive experiments to validate the effectiveness of the proposed algorithm.It further compared the weighted ResNet with other prominent CNNs, verifying its superiority in terms of training depth, model accuracy, and accelerated convergence.
The rest of the paper is organized as follows: Section 2 presents the proposed data augmentation method based on noise removal and addition.Section 3 provides details on the design of the modified residual network.Section 4 presents experimental results, while Section 5 presents the conclusions.The weighted ResNet structure includes a residual strain control factor added to the last layer of each shortcut unit.Compared with other CNNs, the improved network structure has advantages in terms of training depth and model accuracy, as well as accelerated convergence compared to the original ResNet.For data augmentation, an approach incorporating speckle noise addition and cancellation is proposed, resulting in an expanded dataset encompassing both ground-truth and noisy samples.Efficient data augmentation and improved network model accuracy in SAR ATR are achieved compared to other methods by rearranging the training and test datasets.
Data augmentation methodology
In this section, we shall present a data augmentation method based on the noise perturbation.More precisely, we augment the dataset by both canceling and adding noise.
. Speckle noise in SAR images
It is known that SAR imaging suffers from speckle noise.Assume that the radar works under single looking mode, the observed scene can be modeled with multiplicative noise as where I represents the observed intensity, s is the radar cross section (RCS) and n denotes the speckle noise.The amplitude of the RCS obeys exponential distribution with unit mean and the speckle noise is a kind of multiplicative noise.Hence, to generate a SAR image without speckle noise, we first obtain the speckle noise estimate by dividing the ground-truth images by the RCS estimate as where ŝ represents the RCS estimate obtained by applying the median filter. .
Noise based data augmentation
Unlike existing data augmentation approaches, we propose to expand the dataset via noise suppressing as well as noise adding.Following ( 1) and (2), it is not difficult to imagine that we can utilize the estimated speckle noise n to enlarge the training dataset by adding the speckle noise through multiplication and canceling suppressing through division.By doing so, it is able to get lower signal to noise (SNR) images and higher SNR images, which can be expressed as For data augmentation, both the lower SNR images and higher SNR images are taken as effective support.
Deep residual network design
In this section, we shall present the weighted ResNet structure, which has shortcut block units modified by introducing a residual strain control parameter in the second convolutional layer.The weighted ResNet results in less training time compared to its original counterpart.
. Network structure unit
As evaluated in the ILSVRC 2015 classification task, ResNet achieves a 3.57% error on the ImageNet test set, which won 1st place (He et al., 2016).Equipped with shortcut connections, ResNet excels in both learning depth and recognition accuracy compared to plain convolutional neural networks.The essential idea of the ResNet is that it learns the residual function instead of the underlying mapping.The residual function, defined as the difference between the underlying function and the original intensity function (input), automatically includes reference from the input.However, in common CNN networks, the mapping function is learned as a new one in the stacked layers.In other words, the layers are reformulated as residual functions with reference to the layer inputs rather than learning unreferenced functions.
It may have overwhelming advantages, but problems also clearly exist.While conducting experiments with popular networks, we found that ResNets are less likely to converge even after other networks are well trained.This computational shortcoming drove us to explore the reason behind it and left room for improvements.Consequently, we introduced a weighted ResNet variant in our MSTAR data implementation.For a clearer explanation, the supporting theory and analysis will follow the introduction of the network structure.
Figure 2 shows a single shortcut connection of the weighted ResNet, where the fourth and back layers are skipped for the sake of simplicity.The underlying mapping function H(x) is defined as where x denotes the input intensity and W s is a linear projection which matches the dimensions of x with an modified residual function F(•) as where σ (•) stands for the rectified linear unit (ReLU) function and the biases are omitted for simplicity, and c r ∈ [−0.5, 0.5] denotes the residual strain control parameter.As can be seen from Figure 2, the residual unit is modified by adding a residual strain control after the ReLU process.During model training, the control parameter c r is constrained by where η is the learning rate, c r is the graident of parameter c r .Figure 3 draws a single shortcut connection of the proposed improved ResNet.Again, it can be found that, compared to the basic ResNet, the main difference is that a residual strain control unit is added.In this figure, the two blocks are termed identity block (IB) and transformational block (TB), respectively.
. Weighted ResNet structure
In brief, the weighted ResNet involves 20 convolutional layers, in which an average pooling layer and a dense layer are the last two layers.Specifically, it takes the following form as The main architecture and flow chart of the weighted ResNet are given in Table 2 and Figure 4, respectively.
In weighted ResNet, a weight factor, denoted as C r , is introduced to the residual connections of the traditional ResNet architecture.This mechanism can assign different weights to different layers or features depending on their contribution to the final output.This allows important features to have more impact on the output and less significant features to have less impact.
The intention behind introducing a weighting mechanism varies depending on the specific application or task at hand.For example, in some contexts, introducing weights can help deal with class imbalance in the dataset.In other cases, it may be used to increase model robustness against noise or other irregularities within the data.The weights may be learned during training, using backpropagation and gradient descent, or might also be assigned based on preset criteria defined by the researchers.The methodologies can vary in different incarnations of weighted ResNet models.
. Residual strain control for ResNet modification
Although deeper network depth and higher model accuracy are well-noticed, ResNets suffer from untoward convergence.We may first find the outstanding learning ability surprising, but it prompts further thinking and exploration post-implementation.The pain point arises when the residual information and the underlying information are merged.As observed in the Basic and Basic Inc architectures, ReLU will be applied on the residual information channel before the merger.This eventually hampers the seamless integration of the two channels.For the underlying channel, the value is in the range of (−∞, +∞), whereas the value set of the residual channel is significantly limited to merely positive after the ReLU operation.The raw merger operation in original ResNets leads to a bias far from the underlying channel, which suppresses the cognition.This will not only shorten the representation ability of networks, but also tie down the overall training process.Therefore, ResNets fall behind other CNNs in convergence inevitably.
To keep the goodness as well as speed up the training, the residual strain control parameter plays a role.As taken values in the range of [−0.5, 0.5], the residual control parameter c r shifts the residual channel to both negative and positive values.And this turbocharge in turn results in a better fusion of the two channels.Significant improvements in convergence have been achieved in modified ResNets after the multiplication of c r .
It is worth noting that our optimization method does not add any extra structures or computational operations, thus maintaining the computational complexity, measured in FLOPS, at the same level as the base ResNet model.
. Network training
Given the image dataset with S training samples and the corresponding ground-truth labels x i , y i , i ∈ S, we adopt a training cost function with L 2 regularization as where p y i represents the predicting probability for each target class, θ is the trainable parameter of the network, λ 1 and λ 2 are the L 2 regularization parameters.
On the basis of the cross-entropy loss, the cost function has been equipped with two L 2 regularization factors as terms.One corresponds to the model parameters, denoted by θ , and the other for gradient computation, which has been discussed in depth in previous work.In this work, we employ one of the most popular gradient updating techniques, the momentum stochastic gradient descent (SGD) (Ruder, 2017;Tian et al., 2023) to optimize the modified residual network, which will be discussed briefly in this subsection.It is also important to note that the residual strain control parameter c r is also being updated during the training process using the error back-propagation method.
SGD with momentum roots in physical law of motion to go pass through local optima.By linearly combining the gradient and the previous update, momentum maintains the update at each iteration.This keeps the update steps stable and avoids chaotic jumps.The following formulas show how SGD with momentum works: where θ i denotes the model parameter to be estimated, θ i is the ith gradient updates, µ is the momentum coefficient, α is a single learning rate, and ∇L(θ i ) represents the cost function degrade.Compared with plain SGD, with the accumulating speed, the momentum SGD step will be larger than the SGD constant step.Thus, this trick will not only help to achieve global minimum but also increase robustness.
Experiments . Dataset
We evaluate our proposed method using a benchmark dataset from the Moving and Stationary Target Acquisition and Recognition Program published (Zhao and Principe, 2001) To train the weighted ResNet, all the images we used in our experiments are cropped to 100 × 100 pixels, with the target located at the center.We primarily use eight types of target images, and the number of images used for training and testing is listed in Table 1.The cropped image dataset contains 8 types of military ground targets, namely T62, BRDM2, BTR-60, 2S1, D7, ZIL131, ZSU-234, and T72.Images of each target are collected at depression angles of 15 • and 17 • and then turned at an angle of 360 • .We note that one uses images with a depression angle of 15 • for training and images with 17 • for testing.However, this may shorten the recognition ability of the trained deep learning network because of the missing spatial information that could have been included.We stick with this idea and do training experiments with images of 15 • and 17 • with depression angle.
In order to expand the capacity of the original dataset by removing and adding noise (different filtering or noise distribution parameters), in our experiments, we use cropped images of 8 targets to generate image variants, and 400 images are randomly selected for each target.
For illustration purposes, we take one of the T62 SAR images as an example to demonstrate the noise removing and adding behaviors.Figures 6A, B show the original optical image and the SAR image.Figures 6C-E draw the noise-removing images generated through median filtering with the templates of 3 × 3, 5 × 5, and 7 × 7, respectively.Figures 6F-H depict the noise-added images with multiplied exponentially distributed speckle noise with means (termed as M) of 0.5, 1.0, and 1.5, respectively.Finally, the whole noise canceled and added images generated from the cropped images are listed in Table 1.According to our design, the ./fnbot. .
FIGURE
The original optical image (A) and SAR image (B), and noise perturbed SAR images (C-H).SSIMs for the filters of both noise removal and noise adding are set by 90%, 82.5%, and 75%, respectively.
. Classification results
We first conducted experiments to validate our proposed speckle noise-based method.The confusion matrix of our weighted ResNet can be found in Tables 2, 3 as comparisons of data augmentation.The classification accuracy of weighted ResNet using non-augmented training data is 94.56% (7,269/7,680).Table 2 shows the confusion matrix of weighted ResNet using nonaugmented training data.Each row in the confusion matrix represents the actual target class, and each column denotes the class predicted by the weighted ResNet.The classification accuracy of weighted ResNet using augmented training data is 99.65% (7,653/7,680).Table 3 shows the confusion matrix of weighted ResNet using augmented training data.Each row in the confusion matrix represents the actual target class, and each column denotes the class predicted by the weighted ResNet.
The classification accuracy of weighted ResNet with data augmentation is up to 99.65%, increasing by almost 5.1%.Additionally, the weighted ResNet structure has a relatively lower classification performance on the ZIL131 (92.71%) and BTR-60 (92.81%), followed by T62 (93.23%).After the dataset extension, the classification accuracy of ZIL131 is up to 98.96%.A similar improvement is seen in the BTR-60 and T62, each with nearly a 5% increment.This indicates that the speckle noise perturbation based data augmentation method is valid.Moreover, the recognition rate of armored personnel carriers is relatively low, which suggests that the distribution of those targets is near in the feature space.The above results are consistent with the trends observed in which has been published in Kang et al. (2017), a contributor in SAR ATR feature exaction.Further, in Figure 7, we show some instances of misclassification, where we selected only one example from each category for presentation.A→B means cases where a sample with the label A is incorrectly classified as B by the model.
. Network performance comparsion
In our experiments on weighted ResNet and ResNet, the following setups are applied: the mini-batch size is 128, the epoch number is 160, the dynamic learning rates are 1.0 for the first 80 epochs, 0.1 for the next 40 epochs and 0.01 for the remaining epochs, the momentum coefficient starts from 0.9.In order to illustrate its advantages, the weighted ResNet is compared to its original counterpart (He et al., 2016), SVM (Zhao and Principe, 2001), A-convNet (Chen et al., 2016), and Ensemble CNN (Lin et al., 2017), CNNs [ (Morgan, 2015;Ding et al., 2016;Furukawa, 2017), as well as other two deep neural networks [AlexNet (Krizhevsky et al., 2012) and VGG16 (Simonyan and Zisserman, 2014)] for SAR image classification.As shown in Table 4, there is a 0.81% accuracy rise for CNN-3, while nearly 3.57% on AlexNet, and over 4% increase noted in VGG16, ResNet, and weighted ResNet.Table 4 clearly shows that ResNet has a higher recognition accuracy than other networks.Other modified networks without data augmentation can achieve accuracy over 99% (Chen et al., 2016;Lin et al., 2017).
Discussion and conclusion
In this paper, we presented a weighted ResNet model for ATR.Our method tackled problems usually associated with conventional CNN models such as overfitting due to the constrained quantity of ground truth images and the unique complexities presented by speckle noise in SAR images.We incorporated data augmentation and introduced a distinctive residual strain control method, which together contributed to the generation of a weighted ResNet with increased computational efficiency, boosted recognition accuracy, and faster convergence.The data augmentation method proposed in this paper, which involved the addition and cancellation of speckle noise, successfully expanded the quality and size of the SAR image dataset and made the model more resilient.This step was critical, as it provided a practical solution to the issue of scarce ground truth images.
Our novel introduction of a residual strain control to adapt the ResNet model contributed to significant improvements in model efficiency and recognition accuracy and reduced training time.It efficiently managed the residual strain of each weight layer, leading to faster convergence and improved optimization.
Experimental results displayed the superiority of our proposed weighted ResNet model when compared to other prominent CNNs.The accelerated convergence, remarkable training depth, and improved model accuracy showcased our model's effectiveness and robust capabilities in SAR ATR.
While our research and results are promising, the continuous advancement in AI and deep learning applications will consistently present avenues for growth.Future work can focus on further enhancements of the weighted ResNet model for improved model stability and generalization capabilities.Additionally, exploring more sophisticated data augmentation techniques can help in producing even more robust models capable of handling different SAR ATR scenarios.Applying the developed model to other similar imaging techniques can also be an interesting aspect to look into.
FIGURE
FIGUREData augmentation and network training process.
Figure 1
Figure1describes the overall system of the proposed method.It is noticed that the whole process can be in general divided into three parts: data augmentation process, model training, and classification accuracy test.Following (1) and (2), it is not difficult to imagine that we can utilize the estimated speckle noise n to enlarge the training dataset by adding the speckle noise through multiplication and canceling suppressing through division.By doing so, it is able to get lower signal to noise (SNR) images and higher SNR images, which can be expressed as
FIGURE
FIGUREComparison of the identity block [IB, (left)] and transformational block[TB, (right)] between basic ResNet with our proposed weighted ResNet.
FIGURE
FIGURENetwork architecture for weighted ResNet.
by the US Defense Advanced Research Projects Agency and the US Air Force Research Laboratory.The dataset consists of X-band SAR images of different types of military vehicles (e.g., APC BTR60, Main Tank T72, and Bulldozer D7) with elevation angles of 15 • and 17 • .The image resolution is 0.3m × 0.3 m, some example images of different classes are shown in Figure 5.
FIGUREExamples of misclassified samples in each category, with only one example selected per category.The text below the image, A→B, signifies that the expected category is (A), but the model mistakenly classified it as (B).
FIGURE
FIGUREComparison of the accuracies vs. training time.
TABLE List
of noise perturbed SAR images (data augmentation).
TABLE Confusion matrix of the weighted ResNet (without data augmentation).
TABLE Confusion matrix of the weighted ResNet (with data augmentation).
TABLE
Accuracy comparison with other methods.Thus no unusual signs were observed during the training process.Another reason may refer to the experiences gained while conducting network training experiments on different network structures with large volumes of other data sets.Here we train the ResNet and weighted ResNet without loading pre-trained models.The method is robust against noise and momentum SGD training will skip local optimal solutions. | 6,200.6 | 2023-12-19T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Computer Science"
] |
Novel spectral unmixing approach for electron energy-loss spectroscopy
Electron energy-loss spectroscopy is a well-established technique for characterizing low-Z elements in materials. Typically, a measured spectrum image is contributed from several materials when the composition of the specimen is sophisticated. Therefore, decomposing the distribution of each endmember is crucial to material scientists. In this article, we combined multiple linear least-squares fitting and k-means clustering to resolve the aforementioned issue. In addition, our method can nearly extract the true endmembers in materials unsupervisedly. Simulated and experimental data were employed to evaluate the performance and feasibility of our method.
Introduction
A spectrum image of a scanning transmission electron microscope (STEM) allows to collect high-resolution structural and chemical information simultaneously. The two most common spectroscopic techniques in STEM are electron energy-loss spectroscopy (EELS) and energy-dispersive x-ray spectroscopy (EDS), respectively. Although EDS is more efficient in detecting high-Z elements, EELS, on the contrary, is more sensitive to low-Z chemicals [1][2][3][4]. A typical issue of EELS is that the measurement is a mixed spectrum; therefore, signal processing becomes essential when exploring a deep insight into materials, especially to extract the useful signal from the dataset [5][6][7][8][9][10].
Spectral unmixing is one of the tools to decompose the mixed spectrum into a collection of distinct spectra or endmembers-the absorption of individual elements or compounds at a specific range of energy [11,12]. Several algorithms for EDS/EELS data processing, such as principal component analysis (PCA), independent component analysis (ICA), multiple linear least-squares (MLLS) fitting, and non-negative matrix factorization (NMF), have been applied to extract the endmembers from a spectrum image.
PCA and ICA are used to reduce the dimension of the dataset. However, PCA is challenging to extract physically meaningful endmembers because the individual elements in the spectrum can not be identified. Recently, ICA was applied to extract endmembers, but the ICA route requires denoising and is highly timeconsuming [13][14][15][16][17].
MLLS fitting and NMF are both vector quantization techniques aiming to decompose the signal into a linear combination of principal components. It has been demonstrated that the MLLS fitting can solve the issue of edge overlapping in EELS by manually assigning principal spectra as [9,18,19]. On the contrary, the NMF can automatically extract the meaningful components with the positivity constraints on the weighting of the components. Although NMF has been widely used in image processing, text mining, and spectral unmixing, great efforts are required to enhance the efficiency [20][21][22][23][24].
In order to extract physical meaningful endmembers, the cluster analysis, which groups the data with similarity, would be one of the solutions. Various algorithms have been developed for clusterings, such as agglomerative clustering, density-based spatial clustering of applications with noise, and k-means clustering Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI. [25][26][27]. K-means clustering is well-known as an unsupervised routine that groups the data only depending on the similarity and has been widely used for the data processing of electron microscopy. Most of the applications focus on image processing but rarely on spectroscopy [27][28][29][30][31][32][33][34].
In EELS, the spectra are recorded by multiple channels corresponding to different electron energy at a single position. It can be considered as every single spectrum spanned in the multi-dimensional space, and the superposition forms the measured spectra. Therefore, the metric of similarity to each spectrum can be defined as the Euclidean distance in the energetic space [27]. The centroids of each cluster can be represented as the reference spectra for MLLS routine. However, in most of the case, we cannot extract the true endmember by the k-means clustering.
In this article, we propose a novel spectral unmixing approach named kMLLS clustering. By taking both advantages of the MLLS fitting and k-means clustering, the true individual endmembers can be extracted distinctly from a measured spectrum image.
The feasibility and robustness were examined by both simulated and experimental data.
Validation through simulation
The multiple linear least-squares (MLLS) fitting assumes that the nth spectrum in a spectrum image can be formulated as a linear combination of p reference spectra as follows where S n (E cm ) represents the mth energy channels selected to acquire the spectrum image. β i refers to the fitting coefficient of each reference spectra. R i (E cm ) is the endmember in a spectrum image. The main purpose of performing MLLS to a spectrum image is to solve β. The solution can be expressed as The relative fractional abundance of the pth reference spectrum is represented by β p , so that we can obtain the spatial distribution of each reference spectrum [18,19]. Usually, the k-means clustering algorithm, which groups data with the similarity defined by the Euclidean distance, is applied to extract the reference spectra or the endmember from a spectrum image. However, the EELS signal background decays exponentially, which may result in large differences of the similarity if we considered all the energy range during grouping. In order to avoid such situation, we cropped the energy region of interest before the analysis. After that, k cluster centroids were randomly selected from the dataset. Then each data point was assigned to the specific group depending on the distance to the centroid. New centroids were computed as the center of mass of the data points in individual groups. The iterative process was not terminated until the centroids converged. To determine the proper value of k, one would need to compute the sum of squared errors (SSE) from a range of k clusters and determine the k-value on the elbow of the SSE curve [25,30]. The physical interpretation of k-means clustering can be realized as grouping the electron energy-loss spectra at specific positions into k clusters. The centroid indicates the average spectra of each cluster.
In most cases, the accurate phase of endmember could not be extracted by k-means clustering. We hence combine k-means clustering with MLLS fitting to overcome the difficulty. The flow-chart of kMLLS is shown in 1. First, we determined the preliminary k-value of the system and then performed the k-means clustering to obtain the candidates for the MLLS. Second, a brute-force algorithm enumerates all the combinations of centroids and then examines whether a centroid is composed of other two or more centroids. If the centroid is a linear combination (with positive coefficients) of other centroids, it will be removed from the reference spectra for further MLLS calculation. Third, we performed the MLLS routine pixel-by-pixel to obtain the coefficient of each reference. Fourth, the coefficients were employed to calculate the new reference. Finally, we repeated the procedures until no coefficients were changed beyond the tolerance of each reference. The endmembers would be obtained when the references converged.
Simulated spectra with different compositions of endmembers were generated by the EELS Advisor package in the GATAN DigitalMicrograph © software to demonstrate our kMLLS clustering algorithm [35]. In this study, the experimental parameters of JEOL JEM-ARM200F were used to simulate the spectra (primary energy: 200 keV; convergence angle: 44.2 mrad; collection angle of the EEL spectra: 29.6 mrad). To be more realistic, 10% Poisson noise was imposed on each spectrum. Although neither background subtraction nor denoising were needed in advance, the combination of the endmembers in the mixed region was assumed to be linear. All the data processing in this paper were implemented by the Numpy, Scikit-learn, and HyperSpy packages in Python [36,37].
The first example is a two-endmember system. The pure BN and C are located at two endpoints, and the abundant profiles were shown in figure 2(a). Figures 2(b), (c) shows the simulated noisy EELS spectra of BN and C, respectively. The elbow method was applied to find out the optimal k-value for clustering. The result of the elbow route presented in figure 3(a) indicates that the number of clusters is three, which is different from the It is notable that if we pick the mixture phase ( figure 3(b)) as one of the reference spectra, the k-means clustering always trap at the incorrect fractions shown in figures 4(a) and (e). By contrast, the kMLLS clustering still converges to nearly pure phases. For example, if figure 3(c) (nearly pure BN) and figure 3(b) (mixture BN and C) are selected as the reference spectra, the figure 3(b) in kMLLS clustering converges to the nearly pure C phase. A similar result was obtained when we pick figure 3(d) (nearly pure C) and figure 3(b) as the reference spectra. The results were shown in figures 4(b) and (f). This indicates no matter which references are selected for kMLLS, the true endmembers can be extracted if the k-value is correctly determined. It is notable that the kvalues are typically larger than the number of endmembers, and can be corrected by the brute-force algorithm. If the k-value is underestimated, it may result in negative fractions for some endmembers. In such a case, we can then manually determine the proper k-value for kMLLS clustering. An overlapped multiple-edge system, TiN-Ti-TiO 2 , is designed to demonstrate the feasibility of our method in advance. As shown in figure 5, the O K edge is located between Ti L 3,2 and Ti L 1 edges. The k-value determined by the SSE curve shown in figure 6(a) is three. The corresponding centroids of k-means clustering are shown in figures 6(b)-(d). Although the k-value is correctly determined, figures 6(c) and (d) are still mixed spectra. After we implemented kMLLS clustering, the weighting distribution and the endmembers are both correctly retrieved. The comparisons of k-means and kMLLS clustering are shown in figure 7. 3. Demonstrate the feasibility using experimental data Finally, we examined the feasibility of kMLLS clustering through experimental data. A line-scanned spectrum image of oxide-nitride-oxide (ONO) multilayer thin film was acquired using a JEOL JEM-ARM200F Cs-Corrected STEM with a GATAN Quantum 965 EELS camera at 200 keV. Since the EELS signal decays exponentially, inaccurate Euclidean distances might result from poor background subtraction. In reality, the background subtraction is usually problematic when multiple elements exist in the material. Therefore, we only select the region of absorption edge for kMLLS clustering. The advantage is that the features in the spectra dominate the value of Euclidean distances (i.e. similarity), thus alleviating the bias of clustering due to the background. Figure 8
Conclusion
In this paper, we have demonstrated that a novel algorithm, the kMLLS clustering, can successfully extract the endmembers in a spectrum image from both simulated and experimental data. In comparison with k-means clustering, no pre-knowledge or pre-selection of references is needed; therefore, the endmember extraction can be conducted unsupervisedly and lead to nearly true endmembers. We believe that the kMLLS clustering has great potential to in-line investigations and provides a significant insight into materials. | 2,485 | 2020-03-23T00:00:00.000 | [
"Materials Science"
] |
Exploring the Relationship between Obesity, Metabolic Syndrome and Neuroendocrine Neoplasms
Obesity is a major burden for modern medicine, with many links to negative health outcomes, including the increased incidence of certain cancer types. Interestingly, some studies have supported the concept of an “Obesity Paradox”, where some cancer patients living with obesity have been shown to have a better prognosis than non-obese patients. Neuroendocrine neoplasms (NENs) are malignancies originating from neuroendocrine cells, in some cases retaining important functional properties with consequences for metabolism and nutritional status. In this review, we summarize the existing evidence demonstrating that obesity is both a risk factor for developing NENs as well as a good prognostic factor. We further identify the limitations of existing studies and further avenues of research that will be necessary to optimize the metabolic and nutritional status of patients living with NENs to ensure improved outcomes.
Obesity and Human Health
Obesity, defined as a body mass index (BMI) of equal to or greater than 30, has been linked to increased rates of coronary artery disease, type 2 diabetes mellitus (T2DM), cancer, and mental health disorders [1][2][3]. Multiple studies have demonstrated an association between obesity and all-cause mortality, with one study estimating that increased BMIs accounted for 4.0 million deaths worldwide in 2015 [4,5]. The simplicity of BMI as a metric makes it a useful screening tool for obesity. However, BMI is known to be an imperfect metric, with some groups developing more advanced measures of body composition and adiposity that are more predictive of metabolic risk [6].
Obesity is a component of a cluster of metabolic disturbances that have been termed metabolic syndrome (MS). While various definitions for MS exist, the National Cholesterol Education Program's Adult Treatment Panel III (NCEP: ATP III) definition is easily applicable to clinical practice. The NCEP: ATP III panel defines MS as three or more of: central obesity (waist circumference of >102 cm in males or >88 cm in females), hypertriglyceridemia (triglycerides of ≥1.7 mmol/L), low high-density lipoprotein-cholesterol (HDL-C of <1.0 mmol/L in males or <1.3 mmol/L in females), hypertension (blood pressure of ≥135/85 or on medication), and a fasting plasma glucose of ≥6.1 mmol/L [7,8]. MS has also been identified to be an independent risk factor for the development of breast, bladder, and gastrointestinal malignancies [9].
This review article will summarize the existing evidence that obesity and metabolic syndrome directly impact cancer incidence and outcomes, with a particular focus on neuroendocrine neoplasms (NENs).
The Impact of Obesity on Cancer Risk and Outcomes
Obesity has been correlated with both increased cancer risks and poorer cancer outcomes in a number of studies [10,11]. In a prospective cohort study of more than 900,000 adults in the United States of America (USA), a BMI of at least 40 was associated with a greater risk of death from all cancers, with relative risks of 1.52 and 1.62 for men and women, respectively [11]. In 2016, the International Agency for Cancer Research performed a review of the literature and concluded that sufficient evidence exists for a preventative effect of the absence of excess body fat on the risk of developing cancers of the gastrointestinal tract, breast and ovary, as well as renal cell carcinoma, meningioma, thyroid cancer and multiple myeloma [12]. The proposed mechanisms for the impact of obesity on cancer risk include its association with insulin resistance, high levels of insulin-like growth factors, the production of endogenous sex steroids and a state of chronic inflammation [13,14].
Beyond having an impact on cancer risk, other studies have asked whether weight loss interventions can affect the outcomes of individuals already diagnosed with cancer. A recent systematic review specifically examined the effects of weight loss interventions on mortality, cardiovascular disease and cancer [15]. While high quality evidence from 34 trials demonstrates that weight loss interventions can reduce all-cause mortality, very low quality evidence supports a specific benefit on cancer-related mortality (risk ratio 0.58, 95% confidence interval (CI) 0.30 to 1.11) [15].
The effects of weight changes in cancer outcomes is further complicated by cancer cachexia, a phenomenon of weight loss, anorexia and muscle wasting that has been implicated in the mortality of at least 20% of all patients afflicted with cancer [16][17][18]. While multiple definitions for cancer cachexia exist, a commonly used set of criteria decided through international consensus includes weight loss of >5% over a 6 month period in the absence of starvation, a BMI of <20 and any degree of weight loss of >2%, or sarcopenia as measured with appendicular skeletal muscle index (SMI; male < 7.26 kg/m 2 and female < 5.45 kg/m 2 ) and any degree of weight loss of >2% [18,19]. Importantly, interventions designed to address the nutritional needs of patients with cancer cachexia have demonstrated improvements in survival [20]. In a retrospective review by Gannavarapu et. al. of 3180 patients with thoracic or gastrointestinal malignancies [21], pre-treatment cancer-associated weight loss was identified in 34% of patients at diagnosis, and it was associated with reduced survival (hazard ratio (HR) 1.26, 95% CI 1.13 to 1.39). Weight loss during cancer treatment may also independently limit the dose of systemic therapies that patients receive while increasing the likelihood of toxicities [22]. While some limited evidence is available to support a beneficial impact of weight loss on cancer-specific outcomes, a holistic approach designed to meet the specific nutritional requirements of each cancer patient, rather than simply targeting a specific BMI level, is warranted in the management of advanced cancers.
Interestingly, some studies have actually observed improved outcomes in patients with obesity and cancer [23], a phenomenon that has been termed the obesity paradox [24]. This effect has been observed in cancers of the lung [25], kidney [26,27], breast [28] and colon [29], as well as hematologic malignancies [30,31]. In a large meta-analysis of 203 cancer studies performed by Petrelli et. al. [10], obesity was associated with reduced survival and increased risk of recurrence, with the notable exception of lung cancer, renal cell carcinoma and melanoma, where patients with obesity had better survival outcomes [10]. Several possible explanations exist for the obesity paradox, including issues that have been raised around experimental design and interpretation. Exposure to selection bias, the timing of when BMI is calculated and the existence of confounders that can decrease BMI such as cigarette smoking can make the studies challenging to interpret [24]. Cancer cachexia is well-established as a poor prognostic marker which can specifically impact the validity of post-diagnosis BMI as a metric [32,33]. The use of post-diagnosis BMI can also lead to reverse causation, where significant weight loss can be the result of advanced cancer, obscuring the impact of pre-diagnosis BMI on cancer outcome [24]. The use of pre-diagnosis BMI, or serial measurements of weight during disease course, may help address these concerns.
Other groups have also argued that BMI alone is not an adequate measure of obesity [34,35] as it is significantly influenced by factors such as age and sex [36] and does not take into account the difference between visceral and subcutaneous fat [37,38].
Regardless of the uncertainties surrounding the obesity paradox, one potential contributor for the positive impact of obesity on cancer survival is its influence on specific treatments. While obesity can be associated with more surgical complications [39] and negative impacts on chemotherapy efficacy [40], the chronic inflammatory state associated with obesity has been hypothesized to improve the efficacy of checkpoint inhibitors and other forms of cancer immunotherapy [41,42]. The validity of the obesity paradox in cancer remains controversial in the literature, and further research into the area will be necessary to define the potentially protective role of obesity in cancer mortality.
Neuroendocrine Neoplasms
NENs comprise a group of malignancies that originate from the neuroendocrine cells of a diversity of primary sites including the gastrointestinal tract, respiratory tract, larynx, central nervous system, thyroid, kidneys and urogenital system [43,44]. NENs generally arise sporadically, although they can rarely be associated with multiple endocrine neoplasia type 1 or other heritable cancer syndromes [45]. NENs may be broadly subdivided into well-differentiated neuroendocrine tumors (NETs) and poorly differentiated neuroendocrine carcinomas (NECs) [45]. The majority of NETs immunohistochemically express the typical neuroendocrine markers chromogranin A and synaptophysin [43]. Some NETs are considered functional, retaining the ability to secrete hormones such as gastrin, insulin, glucagon and vasoactive intestinal peptide (VIP), thereby resulting in characteristic clinical syndromes [46]. NETs may also retain the characteristic ability of neuroendocrine cells to produce and secrete the amine serotonin, which can result in a characteristic syndrome of diarrhea, peripheral vasomotor symptoms, bronchoconstriction and carcinoid heart disease, known as carcinoid syndrome (CS) [43,47,48]. However, the older terminology of "Carcinoid" tumor for gastroneteropancreatic (GEP) NETs has fallen out of favor as NETs have come to represent true malignancies [49]. In contrast to well-differentiated NETs, neuroendocrine carcinomas (NECs) are poorly differentiated tumors that express fewer neuroendocrine markers, have higher rates of nuclear atypia and proliferation and are associated with poorer overall outcomes [44].
The prognosis and management of NENs is dependent on primary tumor site, histological grade and tumor, node and metastasis (TNM) staging [50]. Gastroenteropancreatic NENs are graded based on the Ki-67 index and mitosis [45]. TNM staging systems exist for GEP, lung and thymic NETs [45,51]. The overall prognosis for NETs is relatively favorable, with 5-year survival rates in the range of 60-80% [47,50,52,53].
Obesity, Metabolic Syndrome, and Incidence of Neuroendocrine Tumors
There has been an increase in the reported prevalence of NENs over time, explained at least in part by improvements in cancer screening and NEN classification [47,52,54,55]. In a recent analysis of the Surveillance, Epidemiology, and End Results (SEER) program that identified 65,971 cases of NETs in the USA between 1973 and 2012, the age-adjusted incidence rate was found to have increased 6.4-fold to 6.98 per 100,000 individuals, with the most common sites being the lung and the gastrointestinal tract [55]. Changes in demographic and environmental factors may play an important role in the increased prevalence of NENs, as well. Specifically, several lines of evidence have pointed toward obesity and MS as risk factors for NETs (Table 1) [56][57][58][59][60][61]. For example, a 2016 meta-analysis of 24 studies identified elevated BMI and diabetes as the second most relevant risk factor for NENs of the stomach, pancreas and small intestine after family history [59]. A USAbased case-control study of 740 patients with NETs also identified diabetes mellitus as a significant risk factor of gastric NETs, with a particularly strong effect in women [60]. In a single-center case-control study comparison of 96 individuals with well-differentiated GEP-NETs and 96 matched controls from the general population, a statistically significant association was observed between GEP-NETs and MS, as well as individual factors such as waist circumference, fasting triglycerides and fasting plasma glucose [61]. A follow-up study from the same group further described that patients with MS and GEP-NETs are more likely to present either with lower grade tumors or at an advanced stage [62]. Taken together, multiple studies support a link between MS or its individual components and a risk of developing NETs of multiple tissues of origin.
Beyond the studies supporting MS as a risk factor for NENs, obesity itself seems to be an independent risk factor for NENs. Interestingly, there have been multiple observations of increased incidences of gastric NETs identified during routine endoscopic evaluation prior to bariatric surgery [63][64][65][66][67]. When extrapolated, these studies suggest that the incidence of gastric NETs ranges from 0.23-0.358% in obese patients in comparison to 0.001-0.002% in the general population [63,64]. Mechanistically, this may be explained by the increased gastric atrophy and G-cell hyperplasia associated with type 1 gastric NETs [64]. The colonization of H. pylori has been linked to the incidence of gastric NETs through the induction of multiple signaling pathways that lead to atrophic gastritis and the hyperplasia of enterochromaffin-like cells [68]. Additionally, the appendix has been identified as a preferential site of gastrointestinal NETs when an appendectomy is performed along with a bariatric procedure [69].
Several studies have directly examined the influence of BMI on the risk of developing NENs. In the study by Santos et. al. [61], visceral obesity, defined as a waist circumference of >80 cm for females and >94 cm for males, was reported as a risk factor for welldifferentiated GEP-NETs (OR 2.5, 1.4-4.6). Leoncini et. al. [59] also performed a metaanalysis of risk factors for NENs where two of the three case-control studies demonstrated an association of BMI with pancreatic NETs, with an adjusted summary effect estimate of 1.37 (95% CI 0.25-7.69, p < 0.001), although the data for the small intestine and rectum were inconclusive. Conversely, a study by Hassan et. al. [60] actually demonstrated a 60-70% reduction in the risk of developing pancreatic and small bowel NETs in overweight and obese individuals. Further complicating this matter, complex interactions exist between metabolism, the microbiome and the risk of developing NENs. Interestingly, a link has been established between inflammatory bowel disease (IBD) and NENs, and it may be the result of associated changes in the gut microbiome [68,70]. Overall, evidence exists for obesity and MS as independent risk factors for the development of NENs, although further studies are necessary to reconcile some of the controversial data in the literature and identify whether this relationship is exclusive to NENs of certain primary sites.
Despite the observations that obesity and MS are risk factors for NENs, it is also established that NENs are associated with changes in nutritional status that can lead to weight gain. Indeed, patients with GEP-NETs have been demonstrated to have a poorer overall nutritional status in comparison to the general population, including less frequent adherence to a Mediterranean diet and increased consumption of simple carbohydrates and polyunsaturated fats [71]. In certain instances, weight gain can also be a biological consequence of functional NENs rather than a risk factor for tumor initiation. Firstly, the ectopic secretion of adrenocorticotropic hormone (ACTH) from pulmonary NETs has been described, resulting in Cushing's syndrome and unintentional weight gain [72][73][74]. In one case series of 918 GEP-NETs and thoracic NETs, the prevalence of ectopic ACTH secretion was reported to be 3.2% and associated with poorer patient survival [75]. In these cases, definitive therapy such as surgical resection can result in weight loss, although specific therapy for hypercortisolemia such as Metyrapone can also be used [72,74]. Secondly, insulinomas are rare tumors that may occur sporadically or in association with multiple endocrine neoplasia type 1 (MEN1) syndrome, and they can also manifest with weight gain as patients attempt to relieve hypoglycemic symptoms by excess food intake [76][77][78]. Thirdly, the secretion of ghrelin from NETs may also act to maintain BMI in patients with metastatic disease and counteract the effects of cancer cachexia [79,80]. Lastly, an especially devastating condition known as rapid-onset obesity with hypoventilation, hypothalamic and autonomic dysregulation (ROHHAD) has also been associated with NETs (ROHHADNET) [81]. RO-HHAD has been reported exclusively in the pediatric population, often presents initially as rapid weight gain and can be quickly fatal due to the impairment of the central respiratory drive [81,82]. ROHHADNET patients typically present with tumors of neural crest origin, such as ganglioneuromas [81,83]. Future studies that examine the correlation between obesity and non-functional NENs may help to determine whether the relationship is causal rather than a reflection of the underlying metabolic changes induced by NENs. Table 1. Summary of the evidence linking obesity and metabolic syndrome to increased incidences of neuroendocrine tumors.
Obesity and Patient Outcomes in NENs
While the above studies support obesity as a risk factor for NENs oncogenesis, early evidence has actually pointed toward a protective effect of increased BMI for patients al-ready diagnosed with NENs (Table 2). In a recent analysis by our group of 1010 patients with NENs, a positive correlation was observed between survival outcomes and increasing BMI [86]. Indeed, the best outcomes were seen in the 30.6% patients categorized as obese (BMI of ≥30 kg/m 2 ), and an underweight BMI was associated with poorer survival (HR 1.74; 95% CI 1.11-2.73) [86]. The effect was also preserved when BMI was used as a continuous variable (HR with increasing BMI: 0.97; 95% CI: 0.95-0.98) [86]. The protective effect of obesity on survival was maintained in independent analyses of 611 patients with NETs and 399 patients with NECs [86]. An important limitation of this study is that only BMI at diagnosis was available as a metric. As BMI is subject to change, longitudinal measurements of BMI over disease course may provide more information about its impact on outcomes. In a survey of 355 NET patients, Pape et. al. [87] described that 36% of patients had already experienced weight loss at the time of diagnosis and cancer cachexia was a significant contributor to mortality. This is supportive of decreased BMI as a poor prognostic indicator for NETs in our study, highlighting the importance of the targeted management of cachexia to improve patient outcomes. Importantly, the prevalence of cancer cachexia in NENs is not well established, although it is thought to be moderate given the generally slow-growing nature of well-differentiated NENs and good overall outcomes [88,89].
Interestingly, the presence of MS was also previously observed to correlate with both low-grade and disseminated or metastatic GEP-NETs [62]. Nevertheless, a stratified analyses of patients with different tumor stages preserved the protective effect of increasing BMI in our study [86]. Overall, the important limitations of our study include the use of BMI (an imperfect correlate of nutritional status and visceral obesity), lack of longitudinal weight data and the unavailability of information on other prognostic factors such as performance status and received treatments [86].
Several additional studies have delved into the potentially protective effect of increased BMI on NEN outcomes. A study of 324 patients with pancreatic NETs confirmed that a BMI of <20 was a negative prognostic factor, although the effect was not preserved in a multivariate analysis [90]. In a different study that examined 128 non-functioning pancreatic NETs, a BMI of ≥25 was not associated with differences in metastases or overall survival, although a comparison was not made with a BMI of ≥30 group [91]. A focused analysis of 22,096 patients diagnosed with GEP-NETs within an inpatient setting demonstrated a decreased likelihood of inpatient mortality in obese patients (OR 0.6, multivariate p = 0.02) and an increased likelihood of inpatient mortality in patients suffering from malnutrition [92]. This study was only able to examine all-cause mortality, and it was further limited by the use of The International Classification of Diseases, Ninth Revision (ICD-9) codes for weight status rather than BMI or other biometric measurements [92]. While some existing evidence points toward improved short-and long-term survival in obese patients that are diagnosed with NENs, further analyses are necessary to define this relationship and identify the impact of obesity on the NENs of different tumor sites. Importantly, the impact of BMI and nutritional interventions on the survival of patients with NENs has not been evaluated in clinical trials or prospective studies, making it difficult to establish a direct role for obesity's effects on patient outcomes. A majority of the current literature has also focused on gastrointestinal NETs, although our analysis demonstrated that obesity is a protective factor for NENs with extra-gastrointestinal system primary sites and NECs, as well [86].
One potential explanation for the obesity paradox in NENs is the impact of BMI on the response to cancer treatment, with likely different impacts depending on the specific modality of therapy. In a study that examined 30 patients with metastatic NETs, improved survival in response to everolimus was observed in patients with higher SMIs and BMIs, although the comparison was only made between patients with a BMI of <18.49 and BMIs ranging from 18.49-24.99 [93]. This may be reflective of the poor outcomes of sarcopenic patients or possibly an effect of increased visceral adiposity on tumor responsiveness to mammalian target of rapamycin (mTOR) inhibition [93]. In a study of 67 patients with liver metastases undergoing chemoembolization, a linear relationship was also observed between BMI and tumor responsiveness [94]. Conversely, in a study of 19 patients with metastatic NEC that were receiving platinum-based chemotherapy, a BMI of ≥25 was actually associated with poorer survival outcome (PFS of 19.3 months and 6.2 months in the BMI < 25 and BMI ≥ 25 groups, respectively, with p = 0.006) [95]. Therefore, both positive and negative associations between BMI and treatment response have been observed in NENs, which is likely reflective of the complex interactions between tumor biology, the microenvironment and mechanisms of various treatments. Further studies in obesity and NEN outcomes should stratify patients based on the specific treatments received to clarify whether the relationship reflects the underlying biology of the disease or the interaction of specific treatments with the metabolic changes observed in obesity. Table 2. Summary of the evidence examining the link between BMI and metabolic syndrome with a prognosis of neuroendocrine tumors.
Citation Study Population Findings
Marrache (2007) Retrospective study of 128 patients with non-functioning pancreatic neuroendocrine tumors No association was seen between BMI and the risk of distant metastasis or death.
Glazer (2014) [92] Retrospective study of 22,096 patients discharged from hospital with abdominal neuroendocrine tumors Obesity was associated with decreased rates of inpatient mortality in patients with NET (OR 0.6, multivariate p = 0.02), and malnutrition was associated with higher rates of mortality (9% vs. 2%, multivariate p < 0.0005). The rate of inpatient hospital complications was similar between obese and non-obese patients, but it was increased in malnourished individuals (15% vs. 10%, p < 0.0005).
Bongiovanni (2015) [95] Retrospective study of 19 patients with metastatic gastroenteropancreatic neuroendocrine carcinoma treated with cisplatin or cisplatin/etoposide Patients with lower BMIs had better overall survival and progression-free survival than patients with BMIs of ≥25. The mOS in the lower BMI group was not reached. The BMI ≥ 25 group had an mOS of 11.7 months (95% CI 5.6-13.5, p = 0.029).
Abdel-Rahman (2022) [86] Retrospective study of 1010 patients with NENs of any primary site between 2004-2019, with complete BMI information Patients with obesity (a BMI of >30 kg/m 2 ) had the best survival outcomes, while underweight status was associated with poorer survival. These results were maintained on a stratified analysis by histology (NEC or NET), tumor stage, and primary site. The overall hazard ratios (OHR) were 0.60 (0.47-0.75) for obese individuals and 1.74 for underweight individuals (1.11-2.73).
Diabetes, Obesity and NENs
Multiple studies have pointed towards increased BMI as the single most important risk factor for the development of T2DM [3,85,[96][97][98]. It is therefore important to determine how diabetes and NENs interact independent of obesity. It is hypothesized that a high insulin state can contribute to cancer growth through its mitogenic properties [99]. Further adding to the complexity, certain types of functional NENs, such as glucagonoma and somatostatinoma, can induce hyperglycemia and are associated with higher rates of developing diabetes [100]. The use of therapies for NENs, such as somatostatin analogues (SSAs) and mTOR inhibitors, can also cause impaired glucose metabolism and insulin resistance [100]. Diabetes is an established risk factor for NENs, although most evidence suggests that patients with NENs and diabetes generally do not differ significantly in their outcomes in comparison to non-diabetic patients [100,101]. However, some studies have suggested that diabetes can modify the risk of metastases. Notably, in the case-control study of sporadic pancreatic endocrine tumors by Capurso et. al. [84], recent diabetes was an independent risk factor for tumor formation and correlated with a higher incidence of metastatic disease at the time of diagnosis. A correlation also exists between T2DM and an increased frequency of the pleural invasion of pulmonary carcinoids [102].
There has been some evidence that the use of metformin for glycemic control in diabetic patients improves survival in NEN patients. In an analysis of patients with advanced pancreatic NETs under treatment with everolimus and/or SSAs, the progression-free survival (PFS) of patients on metformin for diabetes was longer than both the diabetic patients not on metformin (PFS 44.2 months vs. 20.8 months, p < 0.0001) and the non-diabetic patients [103]. In a later post hoc analysis of the Controlled Study of Lanreotide Antiproliferative Response in Neuroendocrine Tumours (CLARINET) examining the use of Lanreotide in advanced non-functional enteropancreatic NETs with diabetes, diabetic patients receiving metformin had significantly longer progression-free survival rates in comparison to diabetic patients not receiving metformin (85.7 weeks versus 38.7 weeks, p = 0.009) [104]. On that basis, the METNET phase II clinical trial was specifically designed to test the role of metformin monotherapy in gastroenteropancreatic or pulmonary advanced/metastatic well-differentiated NETs [105]. This trial did not demonstrate a clinically significant antitumor effect, which led the authors to hypothesize that the beneficial effect of metformin is in its synergistic activity with everolimus to inhibit mTOR. Unfortunately, the clinical evidence for this is also controversial and entirely retrospective [103,105,106]. While the repurposing of a well-established medication such as metformin in cancer treatment is attractive, clearly, further studies are needed to determine whether it has a role in NEN treatment. Further examination of the correlation between obesity and NENs will need to carefully stratify patients based on history of diabetes and the use of metformin to remove possible confounding effects.
Nutrition, Obesity and NENs
A discussion of how obesity impacts NENs would not be complete without considering the significant contribution of overall nutritional status on disease course. The interpretation of how obesity impacts outcomes in NETs is confounded by the various ways in which NETs can impact gut function and metabolism by secreting hormones and small peptides similar to their normal cell counterparts, such as serotonin, gastrin, ghrelin, glucagon, somatostatin and insulin [107]. Therefore, NET patients may develop CS, diabetes, hypoglycemia or hypergastrinemia (e.g., Zollinger-Ellison syndrome) [108,109]. Examples of gastrointestinal complications with NETs include malabsorption, dysmotility, chronic diarrhea and steatorrhea [89,108,110]. These gastrointestinal complications in NETs may result from the location of the tumor within the digestive tract, secretion of hormones by functional tumors and side effects of cancer therapies [89]. Patients with NETs are also at risk for developing deficiencies of niacin, fat soluble vitamins and vitamin B12 [111][112][113][114]. These nutritional complications may be further exacerbated by the use of SSAs that inhibit the function of pancreatic enzymes and surgical treatment of NETs that alter the anatomy and function of the gastrointestinal tract [107,110,112].
The prevalence of suboptimal nutritional status among NEN patients has been demonstrated in several clinical studies. Nutritional status in cancer patients may be evaluated using screening tools such as the Subjective Global Assessment (SGA) and Nutritional Risk Screening (NRS) scores or with bioelectrical impedance analysis [88,[115][116][117]. The SGA scale classifies patients as well nourished (SGA A), moderately or suspected malnourished (SGA B) and severely malnourished (SGA C) [88]. In a cross-sectional analysis performed by Barrea et. al. of 83 patients with GEP-NET [71], patients with GEP-NETs had a worse metabolic profile than the average population, were less adherent to a Mediterranean diet and consumed greater amounts of simple carbohydrates and polyunsaturated fats. This led to increased waist circumference and higher blood pressure, high levels of fasting glucose, total and LDL cholesterol, and higher triglycerides and lower HDL cholesterol [71]. Patients with progressive or stage 4 disease also had a worse metabolic profiles compared to patients with earlier stages of the disease [71]. In an international survey of 1928 patients diagnosed with NETs, a significant number of patients reported gastrointestinal side effects related to their diagnosis (48% diarrhea, 41% abdominal cramping, 21% reflux, 21% weight loss, 19% steatorrhea and 15% weight gain) and 58% reported the need for dietary changes which may negatively impact their metabolic profile [108,118]. This highlights the importance of a multi-disciplinary approach to address the complex dietary needs of NET patients [108,118].
Several studies have been undertaken to evaluate how malnutrition impacts patient outcomes in NENs. In a study that evaluated the nutritional status of patients with NENs using the SGA and NRS, patients with more advanced disease (e.g., Grade 3 NEC, patients requiring treatment with chemotherapy and patients with progressive disease) displayed higher rates of malnutrition [88]. Furthermore, malnourished patients demonstrated poorer overall survival rates (mean overall survival (OS) of 31.17 months vs. 19.94 months between SGA A and SGA B or C, with p < 0.001), an effect that was preserved in a subgroup analysis of different primary tumor sites, disease staging and treatment status [88]. These findings are concordant with a study by Borre et. al. [119], which demonstrated that 38% of NET patients were at nutritional risk. Using the malnutrition universal screening tool (MUST), Qureshi et. al. [120] also demonstrated that 14% of outpatients with GEP-NETs were at nutritional risk. Another study of 325 pancreatic NET patients demonstrated that patients who were underweight, defined as having a BMI of <20 at the time of diagnosis, had poorer prognoses (HR 2.5, p = 0.006) [90]. Independent of the screening tool used, these studies altogether demonstrate that an estimated 25% of patients with NETs may be at risk of significant malnutrition and malnutrition is ultimately linked to poorer patient outcomes [88,119,120].
As a consequence of the unique ways that NENs and their treatments may interact with a patient's nutritional status, specific guidelines have been developed to optimize nutritional status in NENs [107,108]. Examples of important nutritional recommendations for NEN patients include supplementation with niacin, vitamin B12 and fat-soluble vitamins; screening for malnutrition; and dietary modifications when patients develop food intolerances [108,110]. In a study focused on vitamin D deficiency, Robbins et. al. [121] demonstrated that simple advice to increase vitamin D supplementation resulted in a significant improvement in vitamin D insufficiency (66% at baseline to 44.9% after 12 months), suggesting that nutritional interventions may not require significant healthcare expenditures. The targeted management of gastrointestinal side effects may also improve overall nutritional status. For example, the Telotristat Etiprate for Carcinoid Syndrome Therapy (TELECAST) phase 3 trial examined the use of Telotristat ethyl in addition to somatostatin analogues for patients who had diarrhea relating to CS, demonstrating a sustained reduction of bowel movement frequency and weight gain and improvements in nutritional status [122,123]. Future prospective studies will be necessary to optimize specific dietary interventions for NEN patients. Given the relatively good survival outcomes for patients diagnosed with NENs, the risk of developing long-term metabolic complications and cardiovascular disease should a NEN patient's poor nutritional status not be addressed is a strong consideration, as well.
Conclusions
Obesity is a significant burden on human health, with established risks of developing multiple cancer types, although there has been evidence in support of an "obesity paradox" whereby increased BMI is thought to be a good prognostic indicator for certain cancers.
In this review article, we focused on the impact of obesity on the risk and prognosis of NENs. Epidemiological studies have demonstrated a higher incidence of NENs in individuals with obesity and MS, although obesity seems to be a good prognostic indicator for patients with NENs based on the currently available retrospective studies. Certainly, additional research is necessary to define the impact of obesity and MS on outcomes for NEN patients. In our review of the evidence, the following key areas will require further investigations:
1.
Future studies should be designed to deconvolute the individual contribution of visceral adiposity as an independent prognostic factor for NENs while controlling for individual differences in metabolic profile, diabetes, nutritional status and diet, and the co-existence of cachexia and cancer treatment.
2.
Future studies are necessary to evaluate whether the impact of obesity on the prognosis of NENs can be extrapolated to tumors outside of the gastrointestinal system, as well as to neuroendocrine carcinoma.
3.
A prospective study examining specific nutritional interventions and their impact on both survival and patient-reported outcomes will serve to evaluate their impacts and define treatment protocols. Such studies will need to control for known factors in the risk and prognosis of NENs. A standardized tool for malnutrition should be performed before and after the intervention. 4.
The measure of both BMI and more advanced metrics of visceral obesity should be compared to determine the validity of BMI as a metric in similar studies of obesity and cancer outcomes.
Given the relative paucity of data in support of the obesity paradox in NENs, the known nutritional complications relating to NENs and the overall negative health outcomes associated with obesity, it would be premature to make a recommendation targeting a specific BMI for patients with NENs. However, a review of the current literature highlights the importance of weight and nutritional assessment as regular components of the evaluation of patients with NENs, as well as a multi-disciplinary approach to the management of GI side effects, weight loss, malnutrition and cancer cachexia. | 7,595 | 2022-11-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
Fuzzy Logic in Business, Management and Accounting
The aim of the paper is to show the implementation of fuzzy logic in business, administration and accounting, through the research published in Scopus. The results of the document focus on the following sections: 1) Fuzzy set theory, in business, administration and accounting. 2) Analysis of fuzzy logic bibliometrics in business, administration and accounting. 3) Identification and characterization of the documents or seminal documents most cited in applications of fuzzy logic in business, administration and accounting. The method used in this contribution is documentary research using the Scopus database and the VOSviewer science bibliometric analysis and mapping tool. In the future, this computational practice will focus on new diffuse models and the combination of these with other artificial intelligence techniques such as neural networks, genetic algorism, forage bacteria, among others.
Introduction
The fuzzy logic appears for the first time in the mid-60s of the twentieth century and since then, the theoretical contributions and the development of its applications have continued to be considered today one of the most used artificial intelligence techniques (Díaz Córdova, Coba Molina, & Navarrete, 2017;Kokles, Filanová, & Korček, 2016;Ouahli & Cherkaoui, 2019). To a certain extent, the diffuse logic is based on the observation of human behavior, where despite the knowledge of man is faced with many imperfect situations and has certain uncertainties and inaccuracies, his decisions are correct (Díaz Córdova et al., 2017;Dostál & Lin, 2018;Dostál, Rukovanský, & Králik, 2018;Kokles et al., 2016;Ouahli & Cherkaoui, 2019;Pislaru, Alexa, & Avasilcăi, 2018;Plessis, Martin, Roman, & Slabbert, 2018); that is to say: the solution of complex problems solves them with the help of approximate data which indicates that the precision is often unnecessary. In this way, any activity does not require an exact and rigorous mathematical model, as it has not been for driving a vehicle or deciding when a bank grants a loan to a client, although it is known that these are complex issues that require above all skills and knowledge. In addition to being acquired through self-effort, they are obtained through experience (Díaz Córdova et al., 2017;Dostál & Lin, 2018;Grekousis, Manetos, & Photis, 2013;Kokles et al., 2016;Morano, Locurcio, Tajani, & Guarini, 2015;Osiro, Lima-Junior, & Carpinetti, 2014;Ouahli & Cherkaoui, 2019;Pislaru et al., 2018;Plessis et al., 2018). Fuzzy logic deals with the usefulness of imprecision and the relative importance of precision (Bolloju, 1996;Chakraborty, Ravi, Shivangi, Vanshika, & Vishal, 2019;Halabi & Shaout, 2019;Levy, Mallach, & Duchessi, 1991).
From this premise, it can be inferred that the diffuse logic to provide a solution needs a human know-how. Diffuse logic can consider variables of a qualitative nature that are hardly recognized by other techniques and provides an effective approach by systematizing the empirical terrain and transcribing and giving dynamism to experts' knowledge Sharma & Saxena, 2017).
This universal aspect of fuzzy logic allows to apply it in business decision making, negotiation management and commerce processes, based mainly on the experience gained in the management of processes where classical systems have limited behavior (Almutairi, Salonitis, & Al-Ashaab, 2019;Chao & Liaw, 2019).
Fuzzy logic is currently used in a large number of processes, such as financial analysis software, control of energy management systems, validation of electoral processes, medical instruments and in many other applications (Al Nahyan, Hawas, Aljassmi, & Maraqa, 2018;Djekic, Smigic, Glavan, Miocinovic, & Tomasevic, 2018;Sivamani, Kim, Park, & Cho, 2017). The advantage of the combination of artificial intelligence techniques such as fuzzy logic, neural networks, bacteria fodder and genetic algorithms is that they provide effective solutions in a large number of applications and with a low cost of time and resources (Knight & Fayek, 2002;Kunsch & Vander Straeten, 2015;Kushwaha & Suryakant, 2014;Mujahid & Duffuaa, 2007). Artificial intelligence tools such as Fuzzy Logic have been successfully used in energy and resource management (Bravo Hidalgo, 2015;Bravo Hidalgo & León González, 2018;Hidalgo & Guerra, 2016).
The fundamental advantages of fuzzy logic are: 1) It allows to formalize and simulate the report of an expert in the conduction and normalization of a process.
2) It provides a simple answer to the difficult modeling procedures. 3) It takes into consideration several variables and its weighted merger determines the magnitude of influence. 4) It continuously considers cases or exceptions of a different nature, integrating them into the solution. 5) They allow the imple-A. B. Hernández, D. B. Hidalgo Open Journal of Business and Management mentation of multicriteria strategies incorporating the knowledge of the experts.
The objective of this contribution is to show the implementation of fuzzy logic in business, management and accounting, through the research published in Scopus. This document has among its results the following sections: 1) Theory of fuzzy sets, in Business, Management and Accounting. 2) Analysis of the bibliometric of Fuzzy Logic in Business, Management and Accounting. 3) Identification and characterization of the most cited documents or seminal papers in fuzzy logic application in business, management and accounting.
Material and Method
The material used in this contribution is the documentation and bibliometric analysis tools contained in the Scopus academic research directory and the use of the VOSviewer software; the method, a bibliographic review and critical analysis of the results of the detected contributions.
Using the phrase "Fuzzy Logic" in the title of the contributions contained in the Scopus directory; and limiting these results to the subject area "Business, Management and Accounting", 397 documents were detected between 1983 and 2019. Using the bibliometric analysis tools that Scopus provides to its subscribers, Figure 2, Figure 4 and Figure 5 were achieved; In addition, the cited index of each of the investigations referred to in Table 1 and Table 2 was determined.
The bibliometric information of the 397 documents mentioned above was exported from the Scopus directory in two different formats. First, it was exported in (.ris) format, to be processed by the EndNote bibliometric management software. Through this computational tool all the citations contained in this work were generated. This bibliographic management tool allows to make citations by the norm selected by the user, for this work the bibliographic citation norm used was the American Psychological Association (APA). The use of this tool guarantees that each citation has in its reference the organization concerning the standard used and that, based on the integrity of the exported data, the reference has the greatest number of gifted fields.
Then the bibliometric information of the 397 documents was exported from the mentioned academic directory, but this time in (.csv) format. This process allowed us to use this information in the VOSviewer science bibliometric analysis software. The use of this software is justified by its potential in the process of visualization of productivity and orientation of scientific activity. By means of this tool, the density map of terms was obtained, made of text mining on the keywords of the articles detected.
Theory of Fuzzy Sets, in Business, Management and Accounting
In classical theory an element belongs or does not belong to a set. This strict and determinant notion does not allow taking into account other frequently encoun- The notion of fuzzy set is based on the concept of partial belonging where each element belongs partially or gradually to a set. A classic and simple example; suppose that a man is considered tall when his height is greater than 1.80 m and small when he is less than 1.40 m. Where to include a man whose height is 1.78 m? Obviously, that man intuitively has a higher degree of belonging to the group of tall men than to the group of small men. If it is treated classically, the degree of belonging to the group of tall men is zero or zero because it does not reach height and therefore should be considered a short man with full membership in that group.
In general, a diffuse set Q of a universe H is defined as: which indicates that Q is a set of pairs Example of the fuzzy set Q of the numbers close to 1. For this case it is assumed that the degree of belonging to the set is given by:
Analysis of the Bibliometric of Fuzzy LOGIC in Business, Management and Accounting
Research in the application of fuzzy logic in business, management and accounting began at the beginning of the 80s. But they will be important in the scientific community at the beginning of the 21st century. This is mainly due to the increase in market dynamics and business management (Brace, Gatarek, & Musiela, 1997;Hsieh, 1991;Onnela, Chakraborti, Kaski, Kertész, & Kanto, 2003). Figure
Identification and Characterization of the Most Cited Documents or Seminal Papers in Application of Fuzzy LOGIC in Business, Management and Accounting
To date there are only 10 revision-type papers in Scopus. The last review paper related to this topic was published in 2016. Table 1 identifies each of these documents. The researchers Chao and Skibniewski (1998) In the paper of the authors Barclay and York (2003) Policy capturing was used to determine cue weights when a merit raise committee implemented an imprecise directive. The committee was consistent in their evaluations, but the policy was similar to that obtained by counting activities in faculty annual reports.
Evaluations by three raters of 36 faculty were regressed on actual raises. This study has implications for organizations that motivate employees through merit pay decisions in ways that are inconsistent with their mission and business objectives. Collan and Liu (2003) they expressed that: To proceed in an efficient and precise way in the dynamic management of large investments, managers must have constant access to information on the real-time situation of the investment, as well as access to up-to-date information on changes in the business environment. In other words, the economic existence of large investment processes requires a permanent dynamic management, currently. This contribution studies how emerging software technologies will help provide better support in this scenario. In addition, they will provide a support system that will make an inte-
Discussion
Regularly, various methods are used for decision making in business, administration and accounting processes: classics and methods that use soft computing non-linearity. Fuzzy logic differs from conventional computing in that it tolerates inaccuracy, uncertainty and approximation (Sivamani et al., 2017). In effect, the model to follow for fuzzy logic is the human mind (Somasundaram & Genish, 2014). The guiding principle of fuzzy logic is to exploit this tolerance to achieve management capacity, robustness and low cost of solution. Table 2 Flexible computing symbolizes a paradigm shift in computing design, a change that reflects the fact that the human mind, unlike today's computers, possesses an important capacity to store and process information that is inaccurate and approximate (Sheeba & Vivekanandan, 2016;Yager & Zadeh, 2012).
Fuzzy logic in business, management and accounting applications have specific characteristics. They can help in the decentralization of decision-making processes so that they are standardized, reproducible and documented. These methods play very important roles in companies because they help reduce costs, and that can generate higher profits; they can also help companies compete successfully and decrease expenses. Purpose: This paper aims to provide a tool for decision makers to help them with selection of the appropriate supplier. Design/methodology/approach: Companies often depend on their suppliers to meet customers' demands. Thus, the key to the success of these companies is selection of the appropriate supplier. A methodology is proposed to address this issue by first identifying the appropriate selection criteria and then developing a mechanism for their inclusion and measurement in the evaluation process. Such an evaluation process requires decision maker's preferences on the importance of these criteria as inputs. Findings: Human assessments contain some degree of subjectivity that often cannot be expressed in pure numeric scales and requires linguistic expressions. To capture this subjectivity the authors have applied fuzzy logic that allows the decision makers to express their preferences/opinions in linguistic terms. Decision maker's preferences on appropriate criteria as well as his/her perception of the supplier performance with respect to these criteria are elicited. Fuzzy membership functions are used to convert these preferences expressed in linguistic terms into fuzzy numbers. Fuzzy mathematical operators are then applied to determine a fuzzy score for each supplier. These fuzzy scores are in turn translated into crisp scores to allow the ranking of the suppliers. The proposed methodology is multidisciplinary across several diverse disciplines like mathematics, psychology, and operations management. Practical implications: The procedure proposed here can help companies to identify the best supplier. Originality/value: The paper describes a decision model that incorporates decision maker's subjective assessments and applies fuzzy arithmetic operators to manipulate and quantify these assessments. (Ordoobadi, 2009) Competitive advantage is often determined by the effectiveness of an organization's supply chain, and as a result, the evaluation and selection of suppliers has become an increasingly important management activity. But the evaluation process is complex. The data that must be considered are both technical and social/organizational. Much of the data are difficult to obtain and ambiguous or vague to interpret. In addition, the dynamic global environment of changing exchange rates, economic conditions, and technical infrastructure, demand that the pool of potential suppliers be re-evaluated periodically. Nonetheless, a rational process of evaluation must exist to select the most appropriate suppliers. This paper addresses one dimension of the evaluation process, the information sharing capability of potential supply chain partners. It is an especially important dimension since information technology is necessary to horizontally integrate geographically dispersed operations. Fuzzy logic, a subset of artificial intelligence, together with analytical hierarchy process is used to model this process and rank potential suppliers. It is an appropriate methodology to use for this application and has the potential to be used with other supply chain design decisions since it explicitly handles vague, ambiguous, and imprecise data. (Shore & Venkatachalam, 2003) Manufacturing decisions inherently face uncertainties and imprecision. Fuzzy logic, and tools based on fuzzy logic, allow for the inclusion of uncertainties and imperfect information in decision making models, making them well suited for manufacturing decisions.
In this study, we first review the progression in the use of fuzzy tools in tackling different manufacturing issues during the past two decades. We then apply fuzzy linear programming to a less emphasized, but important issue in manufacturing, namely that of product mix prioritization. The proposed algorithm, based on linear programming with fuzzy constraints and integer variables, provides several advantages to existing algorithm as it carries increased ease in understanding, in use, and provides flexibility in its application. We present an interactive user-friendly microcomputer-based decision support system for consensus reaching processes. The point of departure is a group of individuals (experts, decision makers …) who present their testimonies (opinions) in the form of individual fuzzy preference relations. Initially, these opinions are usually quite different, i.e., the group is far from consensus. Then, in a multistage session a moderator, who is supervising the session, tries to make the individuals change their testimonies by, e.g., rational argument, bargaining, etc. to eventually get closer to consensus. For gauging and monitoring the process a new "soft" degree (measure) of consensus is used whose essence is the determination to what degree, e.g., "most of the individuals agree as to almost all of the relevant options". A fuzzy-logic-based calculus of linguistically quantified propositions is employed. (Fedrizzi, Kacprzyk, & Zadrozny, 1988) Open Journal of Business and Management When making decisions we need to consider the possible alternatives and then choose the optimal alternative. The uncertainty of subjective judgment is present during this selection process. Also, decision making becomes difficult when the available information is incomplete or imprecise. This kind of problem exists while selecting a project. There are also several critical factors that are involved in the selection process, including market conditions, availability of raw materials, etc. The decision mechanism is constrained by the uncertainty inherent in the determination of the relative importance of each attribute element. In this paper, we develop a system for the project selection using fuzzy logic. Fuzzy logic enables us to emulate the human reasoning process and make decisions based on vague or imprecise data. Our approach is based on uncertainty reduction. The optimal alternative is formed by the relative weights of each attribute's elements combined over all the attribute membership functions. We also do a case study for the selection of software packages. Our system could be easily applied to other project selection problems under uncertainty. (Machacha & Bhattacharya, 2000) 10 Fuzzy logic-based leanness assessment and its decision support system Vinodh, S. Balaji, S. R. 2011 70 International
Journal of Production Research
Manufacturing organizations have been witnessing a transition from mass manufacturing to lean manufacturing. Lean manufacturing is focused on the elimination of obvious wastes occurring in the manufacturing process, thereby enabling cost reduction. The quantification of leanness is one of the contemporary research agendas of lean manufacturing. This paper reports a study which is carried out to assess the leanness level of a manufacturing organization. During this research study, a leanness measurement model has been designed. Then the leanness index has been computed. Since the manual computation is time consuming and error-prone, a computerized decision support system has been developed. This decision support system has been designated as FLBLA-DSS (decision support system for fuzzy logic-based leanness assessment). FLBLA-DSS computes the fuzzy leanness index, Euclidean distance and identifies the weaker areas which need improvement. The developed DSS has been test implemented in an Indian modular switch manufacturing organization. (Vinodh & Balaji, 2011) Fuzzy logic models are not a black box; because the rules are defined. The advantages of fuzzy logic can be found in the configuration of rules of phenomena and processes of great complexity and their ability to adjust and simulate before implementation. Neuro-fuzzy models could be an advantage in the configuration of rules (Lin & Lee, 1991;Zadeh, 1994).
In the future it is expected that this computational practice will focus fundamentally on new diffuse models and the combination of these with other artifi-A. B. Hernández, D. B. Hidalgo Open Journal of Business and Management cial intelligence techniques such as neural networks, genetic algorism, forage bacteria, among others. Future scientific contributions will focus on various applications to allow business decision making to be faster and more precise, this will be the main research focus because the amount of data to be processed increases exponentially in business, management and accounting. More and more decisions will be made by automatic systems without the timely influence of the analyst (Nasiri & Darestani, 2016;Singhal, Ranganth, Batra, & Nanda, 2016).
The evolution of rapid, more precise, partially or fully automated decision-making systems is where flexible computing methods will be used. They will save time, reduce errors, prevent human failures and reduce costs, thus, favoring the competitiveness of companies (Dostál & Kruljacová, 2019;Geramian et al., 2019;Plessis et al., 2018).
The contributions of fuzzy logic in business, management and accounting have advantageous consequences: in many cases, a problem can be solved effectively and agile using this mathematical method Zohuri & Moghaddam, 2017). The rapid growth in the number and variety of applications of fuzzy logic methods, together with the growing interest of the international scientific community, demonstrates the value of this practice, and suggest that its impact will increasingly be felt in the coming years in the world of business, management and accounting.
Limitations of the Study
• This research is limited to the analysis of documents published in the academic directory Scopus.
• Only paper written in English is studied.
• Only analyze the scientific publications between the years 1983 and 2019. • The number of citations per investigation, shown in Table 1 only refers to citations made by research published in Scopus.
• The analysis of text mining to make the map of terms of Figure 3 is only done in the keywords of the articles detected.
Conclusion
Fuzzy logic is a theory that uses fuzzy sets and very precise rules. This mathematical method uses linguistic variables, the base of rules or fuzzy sets are easily modified, the input and output are related in linguistic terms, they are easily understood and some rules cover a great complexity. The applications of fuzzy logic for the solution of problems in the field of denial, administration and accounting activity have had a notable increase in the international scientific community, in recent decades. This condition will continue in the future given the dynamism and the large amount of information that is currently handled in the areas of economic and business sciences. On the other hand, the application of fuzzy logic in this field finds its greatest utility, in models or algorithms for decision making. In the future, it is expected that this computational practice will focus fundamentally on new diffuse models and the combination of these with other artificial intelligence techniques such as neural networks, genetic algorism, forage bacteria, among others. Future scientific contributions will focus on various applications to allow business decision making to be faster and more precise. This will be the main research focus, because the amount of data to be processed increases exponentially in the subject area: business, management and accounting. | 4,887.4 | 2020-10-13T00:00:00.000 | [
"Business",
"Computer Science",
"Mathematics"
] |
A Comparative Study of 3D UE Positioning in 5G New Radio with a Single Station
The 5G network is considered as the essential underpinning infrastructure of manned and unmanned autonomous machines, such as drones and vehicles. Besides aiming to achieve reliable and low-latency wireless connectivity, positioning is another function provided by the 5G network to support the autonomous machines as the coexistence with the Global Navigation Satellite System (GNSS) is typically supported on smart 5G devices. This paper is a pilot study of using 5G uplink physical layer channel sounding reference signals (SRSs) for 3D user equipment (UE) positioning. The 3D positioning capability is backed by the uniform rectangular array (URA) on the base station and by the multiple subcarrier nature of the SRS. In this work, the subspace-based joint angle-time estimation and statistics-based expectation-maximization (EM) algorithms are investigated with the 3D signal manifold to prove the feasibility of using SRSs for 3D positioning. The positioning performance of both algorithms is evaluated by estimation of the root mean squared error (RMSE) versus the varying signal-to-noise-ratio (SNR), the bandwidth, the antenna array configuration, and multipath scenarios. The simulation results show that the uplink SRS works well for 3D UE positioning with a single base station, by providing a flexible resolution and accuracy for diverse application scenarios with the support of the phased array and signal estimation algorithms at the base station.
Introduction
Accurate and robust positioning is becoming a core requirement for future autonomous vehicles. Location information will serve not only for seamless tracking of the autonomous vehicle in order to allow remote control and to avoid collisions, but also as an enabler for situation and context awareness [1], for improved communication functions [2,3], for optimized path planning [4], and for location-based authentication and enhanced security of communications [5].
Though any Global Navigation Satellite System (GNSS) is currently able to provide centimeter-level accuracy outdoors (e.g., with the help of professional multi-frequency GNSS receivers), it is well known that GNSS suffers from interferences, multipath, and a low signal-to-noise ratio in dense urban environments, where many of the future autonomous vehicles will be deployed (e.g., industrial drones, autonomous robots helping people who are blind, etc.). Complementary solutions to GNSS are necessary, and there are currently two options on the table, namely non-cellular systems (i.e., WiFi, Bluetooth Low Energy (BLE), Ultra Wide-Band (UWB), and Long-Range wireless networks (LoRa)) and cellular systems (i.e., GSM, 3G, LTE, and the emerging 5G systems). One obvious advantage of cellular-based localization techniques over the non-cellular ones is less interference in their frequency bands, as they typically use the licensed spectrum, unlike the non-cellular solutions typically operating in the unlicensed Industrial, Scientific, and Medical (ISM) • Minimum additional signaling and infrastructure: In this work, the positioning information is extracted by the propagation process estimation of the SRS on the receiver side. This means that the method can be applied to existing systems without additional signaling, protocol, or hardware/infrastructure modifications. • Free from the multiple sites synchronization: Our work is different from TDoA-based multilateration or hyperbolic approaches that require costly timing synchronization between distributed sites and the positioning function. The joint azimuth, elevation, and delay estimation methods are used in this work. • High capacity: Our work benefits from the orthogonality in the time and frequency domains of SRSs from the different UEs. The algorithms can be applied to each individ-ual UE for positioning estimation without interference from other UEs. Theoretically, the positioning capacity equals the number of Zadoff-Chu sequences used for UEs. • Flexibility: The position estimation algorithms of this work can easily adapt to the different subcarrier spacing and channel bandwidth combinations in the 5G NR for diverse positioning accuracy levels required by different application scenarios.
The rest of this paper is organized as follows: In Section 2, the system and signal models are describe to facilitate the discussion of the 3D positioning. Then, the subspacebased joint angle-delay estimation and EM-based signal clustering are introduced in Section 3. Section 4 shows the performances of both algorithms in different scenarios. The conclusion is given in Section 5.
Hypotheses
In this paper, the main scope is to investigate the performance of 3D UE positioning rather than self-localization or navigation. The considered system sketch can seen in Figure 1a. To shed light on the positioning methods and performance metrics in the later sections, we list below the hypotheses and constraints at the beginning of the discussion:
•
Frequency band: This work focuses on the sub-6 GHz band (e.g., 3.5 GHz) of the 5G NR carrier bands. Compared with the mmWave band, the sub-6GHz signal has less propagation loss and larger outdoor coverage, which is more suitable for manned or unmanned drones or vehicles. • Receiving antenna: In order to obtain the 3D position of a signal source without trilateration or hyperbolic positioning, the positioning station (i.e., the base station) needs to be able to measure the azimuth, elevation angles, and the time delay simultaneously. Therefore, a uniform rectangular array (URA) is utilized at the receiver end to be capable of spanning the whole azimuth and elevation dimension. The modeling of URA is introduced in Section 2.3. • Signal: The uplink 5G NR sounding reference signal (NR-SRS) sent by the UE is applied. The NR-SRS is an OFDM modulated Zadoff-Chu sequence that is feasible for time delay estimation. The introduction and modeling of SRS are given in Section 2.2. • Algorithm: The extended subspace method and expectation-maximization (EM)based algorithms are investigated for 3D positioning. The positioning algorithms are given in Section 3. • Positioning host: The URA and positioning algorithms are hosted at the 5G base station (called gNB in 5G terminology), so as to leverage the computing power and energy supply to accommodate the large-scale URA in the sub-6 GHz band and run the positioning algorithms.
Sounding Reference Signal in 5G NR
In 5G NR, the SRS is transmitted by the UE for uplink channel sounding, which includes the channel estimation (in the frequency domain) and synchronization. As defined in 3GPP TS 38.211 [16], an NR-SRS is an uplink orthogonal frequency division multiplexing (OFDM) signal filed with a Zadoff-Chu sequence on different subcarriers. For the purposes of communications, the SRS is used for closed-loop spatial multiplexing, uplink transmitting timing control, and reciprocity multi-user downlink precoding. To utilize the channel sounding function, the SRS must be known by both the UE (mobile transmitter) and gNB (base station receiver). With this prior knowledge known at the receiver, the SRS will be used to estimate the angle of the signal source and propagation delay by processing the received OFDM signal on an antenna array. In 5G NR, the SRS is transmitted as OFDM symbols, which are allocated in specified frequency (subcarrier) and time (slot) positions in 5G NR subframes. The generation of the SRS in 5G NR frames includes two steps: (i) Zadoff-Chu sequence r zc ∈ C M rs sc,b ×1 generation (described in Section 2.2.1); (ii) mapping r zc to s srs ∈ C W×1 as an OFDM symbol (described in Section 2.2.2). Figure 2
Zadoff-Chu Sequence Generation
Let us assume that a s srs ∈ C W×1 is expected for a single-antenna UE. We start with the generation of Zadoff-Chu sequence r zc ∈ C M rs sc,b ×1 from Chapter 6.4.1.4.2 in the 3GPP standard TS 38.211 [16]. The length of r zc is M rs sc,b , which is shorter than OFDM symbol W. r srs is a variant of one of 30 base sequences r. We use u to indicate the base sequence index and v to denote different variants. The Zadoff-Chu sequence r srs used for the 5G SRS can be obtained by using Equation (1): where l is the location index of the SRS OFDM symbol in a 5G subframe. l determines the variant index v (the detailed relation can be found in Appendix A.1). l can be chosen from zero to (N SRS symb − 1). N SRS symb is received from the radio resource control (RRC) layer message, which indicates the maximum number of OFDM symbols that can be used for SRS transmissions in a 5G subframe. The length of r srs is defined by the equation below: where N RB sc is the number of subcarriers allocated for the Zadoff-Chu sequence element in each resource block (RB) and equal to 12 in this work, according to [10,16]. The allocated RB value of m SRS,b is chosen from a 64 × 4 SRS bandwidth configuration table defined in 6.4.1.4.3-1 [16], which is indexed by bandwidth configuration parameter C SRS and SRS transmission bandwidth indicator B SRS , respectively. The value of structure controller parameter K TC can be chosen among 1, 2, and 4, and (K TC − 1) indicates the number of empty subcarriers between two Zadoff-Chu elements in an SRS OFDM symbol.
Resource Mapping
To transmit the SRS in the 5G NR frames, the generated Zadoff-Chu sequence r zc (n, l ) in Section 2.2.1 is mapped to the given physical resources (which include subcarriers and time slots). The mapping can be described by Equation (3), which is defined in Chapter 6.4.1.4.3 of the 3GPP TS 38.211 specifications [16].
where subscripts (K TC k + k 0 ), (l + l 0 ) denote the subcarrier and time slot indices, respectively. k 0 and l 0 are the starting subcarrier index and starting time slot index. (K TC k + k 0 ) and l are the shift from the starting position in the frequency domain and the time domain, respectively. N SRS symb and M RS sc,b are explained in Section 2.2.1. The parameter β SRS is the power constraint of the SRS specified in [10]. It ensures that the total uplink power of UEs is controlled under the same standards. The mapping rules are described in Appendix A.2.
After the resource mapping, the the original Zadoff-Chu sequence r zc ∈ C M Rs sc,b ×1 is arranged into specific subcarrier and time slots to form the SRS OFDM symbol s srs ∈ C W×1 (s srs is the transpose of one column of S K TC k +k 0 , l +l 0 ). s srs is then modulated onto the OFDM subcarriers: where the operator (·) is the point product and f w = (w − 1) · ∆ f (∆ f is the OFDM subcarrier spacing).
Propagation Delay Impact on SRS OFDM Symbols
The SRS OFDM symbol s is transmitted from the UE to gNB. Assuming the propagation causes the delay τ of the signal, the delay τ will introduce the different phase shift γ w = e −j2π f w τ on the w th subcarrier. Thus, the time delayed version s can be written as We define the delay manifold as: Zadoff-Chu sequences and resource mapping rules are designated by gNB to the UE. In the 5G NR system, both the transmitter (UE) and receiver (gNB) have prior knowledge of the SRS OFDM symbol. The manifold g(τ) can be used to estimate the propagation delay between the UE and gNB, which is described in Sections 3.1 and 3.2.
Uniform Rectangular Array Signal Model
The URA with M × N elements is illustrated in Figure 1b. M and N denote the number of elements on the x-axis and z-axis, respectively. The antenna elements are halfwavelength spaced horizontally and vertically. We use symbols θ and φ to denote the azimuth and elevation angle, respectively, and use the element at the original point as the reference element. We use an incident signal of a single source k from direction (θ k , φ k ). The array manifold vector can be written as: where the operator ⊗ is the Kronecker product. a z (θ k ) and a e (φ k ) are the azimuth and elevation manifolds, respectively, defined as: where λ is the wavelength and d is the array element spacing, which is the half wavelength in this work. For a single snapshot (sample), the congregate receiving signal from K sources can be written as: and n ∈ C MN×1 are one snapshot (sample) of the array receiving signal, incident signal, and noise, respectively.
Signal Model for 3D Positioning
In Equation (8), the signal x is only one snapshot from the K signal sources. Furthermore, the receiving signal y only contains the azimuth and elevation angles. In order to integrate the delay information contained in the SRS for 3D positioning (Section 2.2.3), the x ∈ C K×1 in Equation (8) is replaced by the multipath propagated SRS S(τ) ∈ C K×W . W is the length of the SRS. S = [s 1 , s 2 , ..., s k ] T . Then, Equation (8) can be rewritten as: where Y ∈ C MN×W , and N ∈ C MN×W are the receiving SRS and noise, respectively, τ = [τ 1 , τ 2 , ..., τ k ]. Y contains the azimuth, elevation angles, and delay information. To facilitate the 3D positioning, we further define the 3D manifold vector: where u(θ, φ, τ) ∈ C MNL×1 is the 3D manifold vector. Similarly, the receiving signal Y is vectorized to Y = vec(Y).
Subspace-Based Approach
The subspace-based signal classification is a widely used approach for angle estimation of the signal source. Most of the works can be traced back to the multiple signal classification (MUSIC) algorithm [17,18]. a similar approach is also applied on the multiple carrier signal for signal delay measurement [19]. The subspace approach is also used for joint 2D delay-angle estimation of the radio source be using the spatial-time manifold [20,21]. In this paper, we extend the manifold to the 3D spatial-time searching space u(θ, φ, τ), as shown in Equation (10). To performance the 3D position estimation, we first calculate the auto-correlation matrix: where E[·] denotes expectation and Y * is the conjugate transpose of Y. We take the eigenvalue decomposition of R Y Y to obtain the eigenvalue vector λ λ λ = [λ 1 , λ 2 , ..., λ MNL ] and eigenvector matrix e = [e 1 , e 2 , ..., e MNL ]. The eigenvalues λ i in λ λ λ are in ascending order, and e i corresponds to λ i . Assume there are K sources. We can define the noise subspace as Equation (12): With the noise subspace, the 3D angle-time spectrum can be defined as: The azimuth angle θ, elevation angle φ, and time delay τ that are associated with the peak values in the 3D space P(θ, φ, τ) determine the estimated signal source position.
Statistics-Based Approach
Another method used in this work is space-alternating generalized expectationmaximization (SAGE) [22,23], which is based on the EM [24] algorithm. The EM algorithm is used to estimate latent states or parameters when parts of the observations are missing or censored. In general, it is an integration process that contains an expectation step (E-step) and a maximization step (M-step). In the context of the statistics approach, we can define the vector η k = [θ k , φ k , τ k ] to indicate the position of the k th source. In the E-step, the estimate of the k th sourcex k (t;η) of the current iteration can be written as: where ζ k (t;η) = a(θ k , φ k )s k is the assumed receiving signal from the k th source on the URA without noise, β k are non-negative parameters, where a(θ, φ) ∈ C 1×MN is the URA manifold and T is the observing duration, which is selected to cover the sequence length and maximum propagation delay.
Simulation Setting
The positioning performance evaluation was carried out using simulations and the system parameters. The SRS configuration and positioning algorithm parameters are listed in Tables 1-3.
Carrier frequency
The n78 3.5 GHz in Frequency Range 1 (FR1, sub-6 GHz) frequency band, the most common 5G band in deployed networks. Furthermore, this band is more suitable for the outdoor scenarios concerned in this work.
Receiving array
Square shaped, half wavelength spacing URA is used. The 4 × 4, 8 × 8, and 16 × 16 configurations are used to test the array scale impact on the angle estimation performance.
Bandwidth
Though the supported channel bandwidth of n78 band is from 10 to 100 MHz, different values are used in this paper to test the bandwidth impact on the delay estimation performance.
Sub-carrier interval
The default subcarrier spaces 15 kHz, 30 kHz, 60 kHz, and 120 kHz are used to test the bandwidth impact on the positioning performance.
Modulation
The SRS is an OFDM signal. However, it is not a data payload. Thus, the symbols allocated to OFDM subcarriers are not Zadoff-Chu code elements, which do not have any modulation.
Channel model
The additive white Gaussian noise (AWGN) complex channel is used.
Duplex mode
Time-division duplexing (TDD) is the most used duplex mode in the n78 band. The positioning methods in this work are neutral to the duplex mode.
C srs 42
The total bandwidth configuration index to the maximum available RBs can be used.
B srs 2
The transmission bandwidth selecting index, used with C srs for selecting the RBs for the SRS.
The comb structure, which indicates the number of subcarrier gaps between two SRS subcarriers. The SRS subcarrier starting position. Frequency hopping and sequence hopping are disabled in the simulations. The subcarrier mapping starts from the first subcarrier. Observation time set as 20 ms to enhance the performance and that equals 20 time slots when the subcarrier spacing is 15 kHz.
Sampling frequency 60MHz
Assuming a 60MHz bandwidth. Sixteen RBs are allocated to the UEs, and the sampling frequency can be selected as 60Mhz.
Example SRS and Positioning Result
In this section, the example SRS used in the simulation in Section 4.2 for the positioning of three different UEs is shown in Figure 4a,b.
Performance Comparison of Positioning accuracy in Different Contexts
In this section, we use the RMSE as the metric for the estimation accuracy of the EM and subspace algorithms. The varying parameters in the following subsections are the SNR, antenna scale, and subcarrier space. Due to the fact that SRSs from different UEs are orthogonal in the time, frequency, and codes domain, the positioning of multiple UEs can be directly decomposed into single target positioning. Therefore, the simulation results shown in this section were executed as single target detection.
Single Target Estimation Performance
Firstly, Figure 5 is given to show how the level of SNR affects the estimation accuracy. The target is 3 m away from the base station, and the SRS is from the direction of [20,20] in the azimuth and elevation dimension. The bandwidth assigned for the SRS is 2.88 MHz, which contains 192.15 kHz subcarriers with a comb-two structure. Ninety-six subcarriers contain the SRS message. The size of the URA for AoA estimation is 8 × 8. Both the EM and subspace methods provide accurate estimation in the high SNR region (>10 dB), and the EM algorithm shows better robustness in the low SNR region (0∼10 dB). The reason for this phenomenon is that the iteration procedures (especially the M-step) in the EM algorithm effectively approach the maximum likelihood estimation. This 3D positioning accuracy is affected by the antenna scales and signal bandwidth. To better understand the antenna scale effect on the RMSE in both the EM and subspace approaches, Figure 6 shows the comparisons under different antenna scales. The signal bandwidth is 2.88 MHz, and the SNR is 9 dB. To achieve better angle estimation accuracy, zero-point-zero-five degrees are set as the angle increment size. With the fixed target position in both cases, the RMSEs of the position estimation decrease with the larger antenna scale. In this comparison, we assume both algorithms know the signal source distance and perform the angle-only estimation. It is evident that the subspace algorithm has the advantage on AoA recognition. ToA accuracy is related to the subcarrier space, while the number of SRS subcarriers is fixed at 96. In 5G NR, subcarrier spacing can be chosen from 15, 30, 60, and 120 kHz. An 8 × 8 URA, 1 ns time step, and 9 dB SNR are used to figure out the subcarrier spacing impacts. Different subcarrier spacings will lead to different signal bandwidths. For the results shown in Figure 7, the performance of the 8 × 8 antenna array is shown under the 9 dB SNR. With the increasing of the subcarrier spacing, both the EM and subspace algorithms show improving positioning resolution. The EM algorithm outperforms the subspace algorithm in the larger subcarrier spacing cases (60 and 120 kHz), while the subspace algorithm outperforms the EM algorithm in the narrower subcarrier spacing cases (15 and 30 kHz).
Impact of the Nearby Reflection
This section discusses the deterioration caused by a nearby reflected signal. Different from the SRSs from the different UEs, the reflection of the SRS from a nearby object has a limited ToA difference from the original SRS. In this simulation, we placed the signal source and reflection point within the minimal resolvable area (1.5 • and 0.3 m target range in the simulation when 8 × 8 is the antenna scale). Assume the signal source is located at a fixed point with a 12 m distance to the base station with [20,20] azimuth and elevation angles, under 9 dB SNR. The reflecting signal is located at 12.6 m with [21,21] azimuth and elevation angles. Under that case, the ToA detection for both the EM and subspace approaches is influenced. The angle and time increment are one degree and 1 ns, respectively. The details are discussed in the following subsection.
• RMSE vs. power: The total bandwidth is 60 MHz, and the SRS uses 2.88 MHz. The reflection signal is set at a fixed range 12.6 m and the [21,21] angles. The EM and subspace algorithms' positioning RMSEs versus different signal power-to-reflection power ratio (SPRP) levels are collected in Figure 8. The EM estimation accuracy decreases with the power of the reflection. However, the influence of the SPRP on the subspace algorithm is more severe (constantly around 0.6, as shown in Figure 5, as the subspace algorithm is more sensitive to the correlated sources.
Limitations and Discussion
From the simulation results shown in Section 4.2, we can see that both the subspace and EM algorithms can successfully localize the SRS source by estimating the angle and delay. The EM algorithm outperforms the subspace algorithm in terms of the position RMSE. However, the current methods have their limitations, which need to be addressed in future work: • Computational load of the subspace algorithm: As can be seen from Equations (6), (10), and (11), the subspace algorithm involves two Kronecker operations and one covariance operation of the Kronecker product. These facts imply that the vector and matrix size will increase exponentially with the number of antenna elements and the subcarriers; moreover, the base station will have to allocate more computing resources for positioning information extraction. Therefore, the development of a computational efficient subspace-based algorithm will pave the wary for applying the subspace-based algorithms in practice. • Multipath propagation: The orthogonality of the SRSs from different UEs avoids the mutual interference to achieve the high positioning capacity. However, the multipath distortions of the ToA parameter estimation is unavoidable because the multiple copies of the signal originated from the same SRS with a recognizable ToA difference are strongly correlated. As can be seen from the results in Section 4.2.2, the estimation RMSE of the position increases if there exists a copy of the SRS close to the UE. Hence, the multipath mitigation schemes with high spatial resolution are the problem to be solved in future work. This example shows that mobility related error exists, and it increases with UE speed. In order to reduce the performance reduction introduced by the mobility of UEs, reducing the observation time can be a straightforward approach.
• Non-line-of-sight positioning: Regarding the non-line-of-sight scenario, this work mainly has two issues that are addressed. The first one is related to the multipath propagation. Multipath indeed makes the estimation difficult as we discussed in the multipath part, when we showed how the nearby reflected signal decreased the estimation accuracy. Furthermore, the position estimation in this case is more challenging than for the line-ofsight, as it has a weak signal strength and a small ToA difference. The second issue is related to the fact that this design can directly estimate the incident angle rather than the reflection angle on the surface where reflection and scattering occur. Thus, this design shall utilize the environment reflection information and the UE's SRS to jointly calculate the target's position. • Requirement of prior knowledge: Both the subspace and EM algorithms in this paper take advantage of the prior knowledge of the SRS and channel equalization to estimate the angle and propagation delay. However, the SRS is only sparsely distributed in the 5G NR frames. These facts limited the effective samples that can be used for positioning purposes.
In addition, both algorithms need to know the number of signal sources to achieve good estimation accuracy. Thus, the blind source separation or blind estimation schemes are also promising topics to explore in the context of 5G NR signal-based positioning.
Conclusions
In this paper, the subspace-and EM-based signal station 3D UE positioning methods in the 5G network are proposed. The positioning function is facilitated by the uplink SRS emitted by the UE and by the antenna array-equipped 5G base station. The 3D positioning performances of both algorithms are investigated under various SNRs, array configurations, channel bandwidths, and multipath scenarios. The simulation results show that both the subspace-and EM-based methods are able to accurately estimate the azimuth/elevation angles and time delay with the 3D signal manifold. We also note that the EM-based method outperforms the subspace-based method in the low SNR region in both the estimation error and resolution. Moreover, the EM-based approach presents the advantage of a lower computational resource cost than the subspace-based approach. Both proposed methods show the capability of achieving different positioning resolutions and accuracy levels by using a flexible signal bandwidth and subcarrier spacing, which is provided by the 5G NR for the different application scenarios. In addition, the orthogonality of the SRSs from different UEs provides excellent conditions for both positioning approaches to detect, localize, and keep tracking a large number of UEs without mutual interference and performance loss. However, the multipath propagation, and especially the reflections close to the UEs, will deteriorate both the accuracy and the resolution performance. Therefore, our future work will focus on the environment-robust 3D positioning method in 5G networks, aiming at better multipath mitigation and better dealing with non-line-of-sight scenarios. In order to change the SRS physical resource allocation pattern with different time slots, frequency hopping is used by controlling the frequency domain index n b to further change the starting subcarrier index k 0 . b hop and B SRS are frequency hopping function controlling the parameters, which are carried in the RRC message. The frequency position index n b in Equation (A5a) has different values depending on the frequency hopping function's on/off status, as shown in Equation (A6).
When b hop is bigger than B SRS , the frequency hopping is disabled, and n b will have a constant value, as shown in the first case of Equation (A6), unless the SRS configuration is reset. Once b hop is smaller than B SRS , frequency hopping is enabled, and n b will be calculated according to the second and third cases in Equation (A6). The F b in the third case of Equation (A6) is defined in Equation (A7), which does not have a specific physical meaning, but is a parameter used in the calculation for simplicity. Furthermore, the quantity n RRC , ranging from zero to 67, is given by the RRC message parameter "freqDomainPosition", which is used for modifying the SRS sequence frequency domain position. It should be mentioned that the value of b in Equation (A7) is equal to N b .
All in all, the Zadoff-Chu sequences are arranged into specific subcarriers and time slots to form the SRS OFDM symbols with the rules defined in the standard; the final SRS information carrying OFDM signal S srs ∈ C N RB sc N RB,UL ×W contains the total SRS sequence r zc ∈ C M SRS sc,b ×N SRS symb . The N RB,UL means the number of RBs allocated to the UEs, and the value depends on the UEs' total bandwidth and subcarrier spacing. Although it varies from case to case, 3GPP [25] has defined the maximum and minimum values of N RB,UL to be 20 and 275, while the subcarrier spacing is less than 240kHz. | 6,717.2 | 2021-02-01T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
NOSOCOMIAL ACINETOBACTER INFECTIONS IN A TERTIARY FACILITY
Acinetobacter plays an important role in the infection of patien s admitted to hospitals. Acinetobacter are free living gram-negative coccobacilli that emerge as significant nosocomial pathogens in the hospital setting and are responsible for intermittent outbre aks in the Intensive Care Unit. The aim of this stu dy was to determine the prevalence of Acinetobacter in patients admitted into the Intensive Care Unit and determine their role in infections in the ICU. A to tal of one hundred patients were recruited for the study, catheter specimen urine, tracheal aspirate and bloo d culture were collected aseptically from the patie nts. The specimens were cultured on blood and MacConkey and the organisms identified using Microbact 12E (0xoid). The Plasmid analysis was done using th e TENS miniprep method. Fourteen (14%) of the 100 patients recruited into the study, developed Acinetobacter infection. Acinetobacter spp constituted 9% of the total number of isolates. Twelve (86%) of the isolates were recovered from tracheal aspirate , 1(7%) from urine and 1(7%) from blood. All of the i solates harbor plasmids of varying molecular sizes. Ten of the fourteen Acinetobacter were isolated at about the same period of time in the ICU with 6(42.7%) having plasmid size in the 23.1kb band and ll showed similar pattern revealing that the isolates exhibit some relatedness. The clonal natu re of the isolates suggest that strict infection co tr l practices must be adopted in ICU, also an antibioti c policy must be developed for the ICU to prevent abuse of antibiotics that may lead to selection of resistant bacteria.
INTRODUCTION
The control of hospital-acquired infection caused by multidrug resistant Gram-negative bacilli has proved to be a peculiar problem. In the 1970s an increase in the resistant members of the family Enterobacteriaecea involved in nosocomial infections followed the introduction of newer broad spectrum antibiotics in hospitals and this led to an increase in the importance of aerobic Gram-negative bacilli, including Pseudomonas aeruginosa and Acinetobacter spp. Gerner-Smidt (1994).
Acinetobacter is ubiquitous, free-living and fairly stable in the environment Joshi et al. (2006). Due to their distinct adhesive ability to epithelial cells; they have a predilection to colonize skin, especially in the areas of the perineum, inguinal region, axillae, mucous membranes and upper respiratory airway and cause human infections which include pneumonia, septicemia, wound sepsis, urinary tract infection, endocarditis and meningitis. Joshi et al. (2006), Luis Garcıa-Garmendia et al. (2001).
Acinetobacter infections are increasingly implicated in infections in intensive care units and have been cited in up to 17% of Ventilator Associated Pneumonia, second only to Pseudomonas, which was responsible for 19% of Ventilator Associated Pneumonia in ICU Munoz-Price and Weinstein (2008). In a review from the CDC, 7% of ICU-Science Publications AJID acquired pneumonias were due to Acinetobacter in 2003 compared to 4% in 19864% in Weinstein et al. (2005. Infections caused by Acinetobacter are difficult to control due to multi-drug resistance, Gerner-Smidt (1994) this limits therapeutic options in critically ill and debilitated patients especially from intensive care units where their prevalence is most noted. Joshi et al. (2006) Acinetobacter outbreaks have been traced to contamination of respiratory-therapy and ventilator equipment from cross-infection by the hands of health care workers who have cared for colonized, infected patients or touched contaminated fomites Villegas and Hartstein (2003) and Maragakis et al. (2004).
A. baumannii, A. calcoaceticus and A. lwoffii are the Acinetobacter species most frequently reported in clinical literature Garcıa-Garmendia et al. (2001).
There is some data to suggest that the proportion of Intensive Care Unit (ICU)-acquired pneumonia cases being found to be due to A. baumannii is on the increase. In large surveillance studies from the United States between 5 and 10% of cases of ICU-acquired pneumonia were due to A. baumannii. Data on Acinetobacter in Africa is largely limited to South Africa at the present time, although there are scattered reports from other countries in Africa Lowman et al. (2008).
Prior to this report there has been no published data on Acinetobacter infection in the ICU of University College hospital Ibadan. Acinetobacter infection in literature is associated with high morbidity, mortality and increased length of hospital stay especially amongst patients in ICU, this has been attributed to its ability to acquire and up regulate resistance genes Garcıa-Garmendia et al. (2001).
The aim of the study was to determine the prevalence of Acinetobacter in the intensive care unit and also to determine the relatedness of the isolates.
MATERIALS AND METHODS
This cross sectional study was carried out in the University College Hospital (UCH) Ibadan, Nigeria. A total of one hundred patients were recruited into study from the ICU which is12 bedded and has a monthly turnover of 25 patients. This population of patients comprised of patients who have had surgery and are on ventilators or intubation with a prior history of antibiotic use and those who have had any form of instrumentation.
Ethical approval was obtained from the University of Ibadan/University College Hospital joint Ethical Committee.
A written informed consent was also obtained from guardians, spouse, parent or caregiver of each participant, thereafter relevant medical history, socio-demograhic data and other information obtained from the care giver and case files were entered into a study proforma.
Specimen Collection and Transport
Tracheal aspirate, blood and catheter specimen urine were collected from all recruited patients for microscopy, culture and sensitivity. Specimens were collected using aseptic technique to prevent contamination. For optimal results, specimens were collected in clean sterile, wide bore containers. The samples were collected in patients who had spent at least 48 h in ICU.
Microscopy
A Gram stain was done on smears made from specimens and then viewed under the light microscope at ×100. Classically Acinetobacter spp appear as; short, plump, Gram-negative rods.
Culture and Identification
The specimen was inoculated on MacConkey agar and blood agar and incubated at 35-37°C for 18-24 h. Acinetobacter species grew on MacConkey agar appearing as non lactose fermenters.
All Gram-negative coccobacilli isolated were tested for catalase and motility. All catalase positive, nonmotile Gram negative coccobacilli were subjected to an oxidase test. All oxidase negative organisms were inoculated into peptone broth for about 30 min. Subsequently 1 mL of the broth was inoculated into the various cups of Microbact Identification kit (Oxoid) and incubated for 18-24 h. After the stated period, Gram negative coccobacilli were identified as Acinetobacter spp based on the reactions on the identification panel which was read with the help of the identification software that accompanied the kit.
Plasmid Analysis
The plasmid analysis was done in collaboration with the molecular biology laboratory of the Nigerian Institute of Medical Research Yaba Lagos, Nigeria, using the TENS method. This was done to determine the relatedness of the isolates.
Plasmid Extraction
About 1.5 mL of overnight culture was centrifuged at 10,000 rpm for 1 min in a micro-centrifuge to pellet cells. The supernatant was gently decanted leaving 100 µ L together with cell pellet; the resulting Science Publications AJID suspension was vortexed at high speed to re suspend cells completely.
A volume of 300 µL of TENS was added to the mixture and mixed until the mixture becomes sticky, 150 µL of 3.0 M sodium acetate pH 5.2 was added to the mixture and vortexed to mix completely.
The mixture was spun for 5 min in a micro-centrifuge to pellet chromosomal DNA and supernatant was discarded, the pellet was rinsed twice with 1 mL of 70% ethanol and the pellet re-suspended in 40 µL of distilled water for further use: (TENS composition: Tris 25 mM, EDTA 10mM, NaOH 0.1N and SDS 0.5%)
Agarose Gel Electrophoresis
Agarose Gel Electrophoresis was the separation method used to separate DNA based on their molecular weight. A load of 0.8 g of agarose powder was added to 100 mL of TAE buffer and dissolved by boiling. It was allowed to cool to about 60°C then 10 µL of ethidium bromide added and mixed by swirling gently. The agar was poured into electrophoresis tray with the comb in place to obtain a gel thickness of about 4-5 mm. It was allowed to solidify then comb removed and the tray placed in the electrophoresis tank.
TAE buffer was poured into the tank ensuring that the buffer was covering the surface of the gel. A volume of 10 µL of sample was mixed with 2 µL of the loading dye and the samples were carefully loaded into the wells created by the combs. The marker was loaded on lane 1 followed by the samples. The electrodes were connected to the power pack in such a way that the negative terminal was at the end where the sample has been loaded. Electrophoresis was run at 60 volt until the loading dye had migrated about three-quarter of the gel. The electrodes were disconnected and turned off and the gel viewed on a UV-transilluminator: TAE: (Tris, Acetic acid and EDTA)
Data Analysis
All data were analyzed using the Statistical Package for the Social Sciences (SPSS) version 15.0. Data were presented using frequency tables, charts, as appropriate and cross tabulation to study relationships and association between variables.
RESULTS
A total of one hundred patients were recruited into the study over a period of nine months. The age of the patients ranged from 2 years to 95 years. Majority of the patients (40%) were in the 31-40 year age group while the 10-20 year age group constituted the least age group (4%). There were 52 males and 48 males, giving a male to female ratio of 1.08:1. Forty Eight (48%) were Christians, forty seven (47%) were Muslims and 5% were traditionalists. Eighty one (81%) of the patients were admitted from the Accident and Emergency unit while 19% were from other wards in the hospital, Fifty eight (58%) of the patients were resident in Ibadan while forty two (42%) of the patients were from outside Ibadan ( Table 1). Acinetobacter spp was isolated from fourteen (14%) of the total number of patients recruited into the study and was responsible for 14% of infections in the ICU based on evaluation of clinical charts. It represented 9% of the isolates from all the specimens collected during the study period. (Table 1) Acinetobacter was responsible for 1% of urinary tract infections, 14% of respiratory tract infections and 1% of blood stream infections.
All the isolates in the study harbor plasmids of different molecular sizes. Ten of the fourteen isolates were isolated at about the same period in the ICU with 6(42.7%) having plasmid size in the 23.1 kb band and having similar pattern showing that they may be related. The remaining 9(67.3%) did not show any appreciable bands that were related to the others (Fig. 2)
DISCUSSION
The prevalence of Acinetobacter infection in this study was 14%, this high rate of Acinetobacter infection may be attributed to the poor infection control practices in the ICU of the Hospital. The observed prevalence is higher than reports from similar studies carried out in France by Joly-Guillou (2005) who reported 9% and Iregbu et al. (2002) who reported 4.6% in Lagos, Nigeria.
AJID
Acinetobacter constituted 9% of all isolates in the study, this finding is low compared to 14.5% obtained by Kessaris et al. (2006), 13.9% by Raka et al. (2004) but higher than 8.4% reported by Oberoi et al. (2009) and 3% reported by Iregbu et al. (2002). This may be because their study included all patients in the hospital compared to this study which was limited to the ICU.
The incidence of Acinetobacter blood stream infection in this study was 1.3%, this is a little lower than 2% reported by Michalopoulos and Falagas (2007) and 8.8% reported by Garcıa-Garmendia et al. (2001) This difference may be due to the lower number of patients recruited in the study.
Prior to the introduction of current advanced resuscitatory devices, Acinetobacter spp were mainly recovered from the urinary tract however with the advent of mechanical ventilation most of the organisms are now being isolated from respiratory tract specimens. In this study, majority 12(86%) of the isolates were recovered from tracheal aspirate. This finding is similar to the 87% observed by Raka et al. (2004) but much higher than 46.5% reported by Popescu et al. (2011). The high rate of recovery from the respiratory tract may be due to the invasive procedures that are carried out in the respiratory tract in the process of maintaining the airway. The low recovery rate (1%) of Acinetobacter spp from the urinary tract is similar to the 1.6% rate reported in the NNIS study Weinstein et al. (2005). This further confirms the popular report that Acinetobacter is no longer a common uropathogen.
Severe underlying diseases, invasive diagnostic and therapeutic procedures used in ICUs have been demonstrated to predispose patients to severe infections with Jarousha et al. (2008). In recent years, A. baumannii has become an important pathogen especially in intensive care units. Persistence of endemic A. baumannii isolates in ICU seems to be related to their ability for long-term survival on inanimate surfaces in patient's immediate environment and their widespread resistance to the major antimicrobial agents Oberoi et al. (2009). Our finding of A.baumannii as the major pathogen in this study is therefore not too surprising as this has been reported earlier by Joshi et al. (2006).
All the clinical isolates of Acinetobacter in the study harbor plasmids of different molecular sizes. Ten of the isolates were isolated at about the same period of time with 6(42.7%) having plasmid size in the 23.1 kb band and exhibit similar patterns showing that they may have originated from the same clone and a signal to potential outbreak strain. Outbreaks of Acinetobacter infections are linked to contaminated respiratory equipment, intravascular access devices, bedding materials and transmission via hands of hospital personnel.
Acinetobacter outbreaks have been traced to commonsource contamination, particularly contaminated respiratory-therapy and ventilator equipment, crossinfection by the hands of health care workers who have cared for colonized or infected patients or touched contaminated formite and to the occasional health care worker who carries an epidemic strain Raka et al. (2004).
CONCLUSION
The ICU is responsible for providing life support services to patients from diverse specialties. The isolation of Acinetobacter among critically ill patients in the ICU is a cause for concern. There is an urgent need for education of health care workers in the ICU on proper infection control practices. There is also a need for active surveillance for Acinetobacter spp in the ICU. | 3,367 | 2012-01-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
Searchable encryption algorithm in computer big data processing application
With the continuous development of computer technology, the amount of data has increased sharply, which has promoted more and more diversified data transportation and processing methods. At the same time, computer data analysis technology can effectively process data. This is reflected in the computer big data analysis technology not only can realize data visualization analysis, but also has data prediction and data quality management. The development of cloud computing network technology can not only provide convenience points for individuals, but also provide space for enterprises to store data. The emergence of keyword search encryption algorithms solves this problem. When users use keywords to search encryption algorithms, they can search for cipher text keywords to find the files or data they want in the cloud environment. At present, it has been widely used. In addition, this article also improves the keyword search plan and the user's query plan according to the dynamic changes of keywords, and proposes a user's multi-dynamic keyword search encryption plan. Through this program, users can search for encrypted files by keywords and change them, and the changed data will be dynamically updated. In this way, the program can realize multi-user data sharing, and can realize efficient search and dynamics.
Introduction
With the continuous development of science and technology, traditional computing modes and computing algorithms can no longer meet the needs of current computers, which requires the development of new technologies and algorithms.Cloud computing is one of the new algorithms.Its emergence solves the problems of data calculation and storage.Now it has been widely used in all walks of life.Its concept has also inspired the development of other technologies.This article focuses on cloud computing.Conducted research to provide a theoretical basis for the development of science and technology.Under the current background of rapid development of science and technology, whether the efficiency and quality of data processing can be guaranteed is the key to restricting the development of science and technology, which requires the relevant data processing technology to have highenergy data processing capabilities and secure encryption technology.Therefore, in order to achieve this goal, this article focuses on a detailed study of computer to big data processing technology and cloud computing technology, hoping to find a breakthrough for the development of data processing technology and open up a path for future development.Cloud computing is currently a hot data processing technology, which can combine storage resources, software resources, computing resources, etc. through clusters.Users can connect to the cluster through the network to obtain resources and storage space in the cluster.Whether the use of cloud computing can effectively reduce the waste of resources and increase the efficiency of computing at the same time, this is also the focus of academic research.However, there are still certain problems in the development of cloud computing technology [1].The first thing to bear is the security issue of cloud computing.If you want to achieve the healthy development of cloud services, it is necessary to solve the security problems of existing cloud computing technologies, so that enterprises and individual users can use cloud computing to solve problems with confidence [2].Among them, the existence of cloud storage technology allows data to be used efficiently.In order to reduce resource consumption and improve business level, small businesses or individual users often like to store data in cloud space, because the cost of storing data in cloud space is much lower than other methods.However, cloud space is a third-party storage organization, and there are still certain hidden dangers in terms of security.If users directly store data in the cloud space without processing, once there is a loophole in the cloud storage space, the user's information security cannot be guaranteed [3].Therefore, it is not enough to just put the data in the cloud space, and it is necessary to rely on secure encryption services in addition to the cloud services provided by the cloud service provider.However, the encrypted data does not have the original language structure and characteristics, and the user cannot retrieve it, which is not conducive to the user's use.In order to solve this problem, the encrypted file needs to be downloaded and decrypted first, so that the encrypted data is converted into plaintext again, and it is much easier to retrieve the plaintext.The ciphertext search technology has two methods: fulltext search and ciphertext search.The former directly searches the full-text to obtain the target ciphertext directly, while the latter is realized by searching keywords.
Related work
The literature proposes that the KES scheme has the following four steps: (1) Generate a key, and the generated key needs to be kept by the user; (2) Encryption, which is mainly to encrypt the index, and upload it to the server after being encrypted locally.In the first process, the ciphertext is not leaked; (3) Generate search credentials, and users can obtain search credentials through the key and upload them to the server.However, the fact that the server obtains the search credential does not mean that it can search the information stored by the user.(4) Search.After obtaining the search credential, the user's key is also needed to perform a ciphertext search and return the qualified data.The server cannot obtain redundant information in this process.The literature is the first batch of literature to study KES.In this literature, it includes an extension to the KES program, and a multi-keyword search method is also designed [4].The literature found that some scholars have proposed an efficient and dynamic update searchable encryption scheme with time complexity [5].This scheme retains the advantages of the original scheme while optimizing the encryption method of the data structure, and adds data addition and deletion query tables, and operations on files [6].The process was documented.Later, someone improved the scheme, forming a dynamic update search scheme that supports the red-black tree structure, so that the scheme can be carried out in parallel, and the efficiency can be improved by taking advantage of the advantages of multi-processors [7].The literature proposes the first many-to-one model of KSE scheme, which can be applied to the service scenario of routers.In this scenario, the sender and recipient of the file are two different users, and the server plays a role in it [8].The function is to filter the router information.The literature believes that KR-PEKS is constructed in accordance with the K-resilient IBE scheme, and can be expanded on this basis: on the one hand, it can support tube detection and search; on the other hand, it can also remove the safe passage [9].Compared with BDOP-PEKS, the processing efficiency of this scheme is high, and it does not even need to use two-line row operation [10].But it is necessary to set the security parameters, otherwise the number of queries that can be inquired cannot be controlled [11].When setting parameters, adjust them to appropriate sizes according to actual needs [12].If the parameters are too large or too small, it is not conducive to the operation of the program.The literature points out whether the conversion of the IND-CKA's security plan into a security plan can meet the dual requirements of computing and security [13].The literature believes that there are many schemes based on the many-to-one model, which will not be repeated here [14,15].These schemes can search for keywords and perform stepwise encryption calculations on data to generate ciphertext.When searching, users can generate search credentials based on keywords and ciphertexts, so that the server can perform calculations.It should be noted that this solution relies on BDH mathematical assumptions and requires the use of the double-line feature of mathematical assumptions.
Keyword searchable encryption basics
Definition 1: There is a set G = {g 1 ,g 2 ,g 3 . . .}.If the set G pair operation * satisfies all the following conditions, it is called the group < G, * > : Closeness: if g i , g j ∈G, there is gk∈G, satisfying g i * g j = g k Associative law: if g i , g j , g k ∈G, there is always (g i * g j ) * g k = g( i * ) (g j * g k ) There is identity element: there is g e ∈G, for any gk∈G, there is always g k * g e = g k * g e = g k There is an inverse element: for any g i ∈G, there is always g i * g j = g j * g i = g e , where g e is the identity element.
Definition 2: The number of group elements is called the order of the group, denoted as |G| Definition 3: There is a group < G, * > , where G is a non-empty set, and * is an operation on the set.If there is g∈G, for any element gi∈G, it can be expressed as g i = g ∧ n, and n is an integer, then the group < G, * > is a cyclic group, and g is the generator of the group.Research based on bilinear pairing has greatly promoted the development of cryptography research.
The Lagrange interpolation theorem can be used to realize secret sharing.Based on the theorem, KP-ABE and CP-ABE schemes can be constructed.The Lagrange interpolation theorem is as follows.
Then the polynomial can be determined as: The access tree can flexibly express the access authority control, so some ABE schemes are implemented by using the access tree.When checking whether the user attribute matches the authority corresponding to the access tree T, it is necessary to let R be the root node of the T tree.If x is a non-leaf node, the child node x'of x needs to be calculated.
Cryptography relies on the difficulty of breaking through mathematical problems to ensure the security of the scheme.If a cryptographic scheme passes strict proofs and can be simplified to a certain mathematical problem, then the scheme is considered safe.This section mainly introduces the definition of the mathematical problems involved in this article.
Random prediction models can be used to analyze the security of cryptographic schemes.The random prediction model is defined as follows: It satisfies the operation H: {0,1} ∧ * →{0,1} ∧ * →, and has the following three properties: Uniformity: If the input is random, the output distribution is uniform; Determinism: If the input is the same, the output is also the same; Validity: the output result of the polynomial.
In order to meet certainty and uniformity, the output entropy value of the random prediction model must be less than the input entropy value, and entropy theory stipulates that the output entropy value of the deterministic function should not be greater than the input entropy value.Therefore, ROM is an ideal model and cannot be truly realized.In the implementation of the algorithm, the random prediction model is instantiated by a specific one-way hash function.
The scheme based on the standard model does not rely on a random vector machine, but only relies on breaking through mathematically difficult problems to ensure safety.Therefore, the security of the standard model is higher than that of the random vector model.It can be seen that the encryption algorithm design based on the standard model is still a research hotspot.
D-ATTR-PEKS scheme design
This chapter first introduces the classic structure of the public key-based KSE scheme, and points out the shortcomings of several schemes and corresponding solutions.The existing PEKS schemes are all extended on the basis of the scheme proposed by Boneh et al.The description of the program is as follows.
(1) Keysgen(s)→(A pub ,A prv )→: s is the security parameter, (A pub ,A prv ) are the public key and private key respectively According to the safety parameters, calculate (2) PEKS(A pub ,W)→S: A pub is the public key, W is the search keyword, and S is the ciphertext.
The algorithm is executed by the data sender, rεZ p ∧ * is randomly selected, and the calculation (3) Trapdoor(A priv ,W)→T w : A priv is the user's private key, W is the search keyword, and T w is the search credential.
The algorithm is executed by the data receiver and calculates This solution satisfies the security of CKA, that is, it can only guarantee the security of PEKS ciphertext, and users cannot obtain any information about keywords from PEKS ciphertext.Someone pointed out that the PEKS solution must establish a secure channel, otherwise the attacker may intercept the user's query results, and even further use the data to obtain user information.Therefore, it was pointed out that SCF-PEKS cannot resist offline keyword guessing attacks.This is because the keyword ciphertext space is not as large as the keyword space, and generally speaking, the keyword selection entropy is low, so the attacker can crack the dictionary attack.The dPEKS solution introduces the concept of a trusted server designated by the user, allowing users to choose a trusted server to upload ciphertexts.When other users need to search, the trusted server must predecrypt the search credentials uploaded by the user.If intercepted, the attacker will not be able to obtain information about the certificate.The dPEKS scheme is described as follows: (1) Setup(()): s is a safety parameter This algorithm is executed by an authorized institution and can generate hash functions H 1 :{0,1} ∧ * →G 1 and H 2 :G 2 →{0,1} ∧ logp (2) Keygen server →(sk s ,pk s ): (sk s ,pk s ) is the public and private key pair of the search server The algorithm is executed by an authorized institution, and αεZ p ,QεG are randomly selected, and the calculation Among them, sk s is the private key of the search server, which is saved by the search server, and pk s is public as the public key of the search server.
(3) Keygen server →(sk s ,pk s ): (sk s ,pk s ) is the receiver's public and private key pair The algorithm is executed by an authorized institution, x∈Z p is randomly selected, and the calculation sk r = x, pk r = g x (6) (4) dPEKS(pk r ,pk s ,W)→C: pk r is the server public key, pk s is the user public key, W is the encryption keyword, C is the searchable ciphertext.
The algorithm is executed by the data sender, selects a random number r∈z p , and calculates C = {A, B} = {(pk r ) r , H 2 (e(y s , H 1 (w) r ))} (7) (5) Trapdoor(sk r ,W)→T W , sk r is the user's private key, W is the search keyword, and T w is the generated search credential.
The algorithm is executed by the data receiver, r ∧ '∈Z P is randomly selected, and the calculation The algorithm is executed by the search server and first calculates The attack method of this scheme is described as follows: The attacker is interested in the keyword set When a user initiates a search request to the server, the attacker will monitor and intercept the search credentials.After the server receives the search credentials, it will traverse all the ciphertexts uploaded by the user and calculate them one by one, including the ciphertext uploaded by the attacker maliciously.
The attacker monitors the server and intercepts the search results it sends to the user.If the file ID sent by the attacker exists in the result, the attacker will get the keyword corresponding to the search credential and know the keyword requested by the search user.The SPEKS scheme designed by Chen et al. can be used to defend against online keyword guessing attacks.Users can use session information to decrypt search results locally to ensure that attackers cannot obtain the information.
The above solutions all take the security of the KSE solution as the entry point.If the KSE solution is deployed in a cloud environment, the multi-user situation must be considered.CP-ABE is used to solve the many-to-many access problem in the cloud environment.Access control permissions can be set for ciphertext, and file access can be performed only when the user attributes comply with the access policy.The procedure is described below.
(1) Setup(t)→(PK,MK): t is the security parameter, (PK,MK) is the system public and private key pair The algorithm is executed by an authorized institution, and generates p-order groups G 1 , G 2 according to the security parameters.At the same time, generate a random vector model H 1 :{0,1} ∧ * →G 1 and a one-way hash function H 2 :{0,1} ∧ * →z p .Randomly select a, b, c∈z p , and the generator g∈G.Calculation PK = {g a , g b , g c }, MK = {a, b, c} (2) keyGen (MK, S) → SK: MK is the system master key, S is the user attribute set, and SK is the user private key The algorithm is executed by the authorized institution, rεZ p is randomly selected, r j ∈Z p is randomly selected for each element j in the set S, and finally generated (3) Enc(w,T)→Cph: w is the search keyword, T is the access control tree, and Cph is the searchable ciphertext The algorithm is executed by the data sender, randomly selecting r 1 ,r 2 Z p , calculating W = g ∧ cr1, W 0 = g ∧ (a(r 1 + r 2 )) g ∧ (bH 2 (W) R 1 ),W ∧ ' = g ∧ (Br 2 ).The access control tree is defined in the same way as the CP-ABE algorithm.The root node of the policy tree T is set to r 2 , and the child nodes are assigned random polynomials.The calculation is w j = g ∧ (qv(0)), D j = H 1 (j) ∧ (qv(0)).Finally got Cph = {T, W, W 0 , W , j ∈ Attr(T), W j , D j } (4) TokenGen (SK, w) → TK: SK is the user's private key, w is the query keyword, and TK is the search credential.
The algorithm is executed by the data receiver, randomly selects s Z p , and calculates tok 1 = (g a g bH 2 (W) ) S , tok 2 = g cs ,tok 3 = A s = g (acs−rs) /b .Calculate A j ∧ ' = A j ∧ s, B j ∧ ' = B j ∧ s for each attribute, and finally generate search credentials TK = {tok 1 , tok 2 , tok 3 , ∀j ∈ S, A j , B j } ( (5) Search (TK, Cph)→b: TK is the search credential, Cph is the key word ciphertext, output bε0,1∈ The algorithm is the same as CP-ABE, first calculate Use the recursive algorithm to get e(g,g) rsqv(0) = E root .
Final judgment e(W 0 , tok 2 ) = e(W, tok 1 )E root e(tok 3 , W ) ( If the equation is true, output 1; otherwise, output 0. However, this solution does not consider offline keyword guessing attacks.
In the above-mentioned classic construction scheme, although PEKS implements the KSE scheme, it does not consider the problem of keyword guessing attacks.In addition, the above solutions do not support many-to-many models, so they are not suitable for cloud storage environments that support data sharing.CP-ABKS supports multiple users, but it cannot resist offline keyword guessing attacks.
The cloud storage server can store the user's resources and information.Authorized institutions are responsible for establishing encryption systems, and generating and issuing public and private keys and user keys for search servers.The data owner is the owner of the data.When the data owner needs to upload data, he can use any encryption technology to encrypt the file, and then use the scheme proposed in this article to encrypt the keywords.The search server is mainly responsible for searching.And the search server can use the search credentials uploaded by the data user to search for the ciphertext.The search server will return the correct information only when the user authority meets the encryption keyword search authority control, and the search keyword matches the file keyword.
The scheme proposed in this paper has the following 5 types of roles, as shown in Figure 1.
The solution in this paper is composed of algorithms such as search server key generation, user private key generation, keyword encryption, search credential generation, keyword search, search result encryption, and search result decryption.
This section analyzes the correctness of the Test algorithm, as shown in Equation 17.Secondly, according to Equation 18, the intermediate result Y is obtained, as shown in Equation 19.Finally, according to Equation 20, the intermediate result Z is obtained, as shown in Equation 21.
Table 1 mainly compares from five aspects: Combined with the functional analysis in Table 1, the solution in this article does not require a secure channel, can effectively resist external intrusions, and is more suitable for running in a cloud environment.
The program analyzes the execution efficiency through experiments, and selects the A-type elliptic curve in JPBC. Figure 2 shows the execution efficiency of a typical algorithm.
The experimental results show that each algorithm is roughly proportional to the number of attributes, so the time spent in the solution in this paper is within an acceptable range.
MU-DSSE scheme
Improving the search efficiency of the KSE scheme based on the public key is still a hot research topic.There are two ways to improve search efficiency: use more effective mathematical operations instead of pairing operations, and someone proposed a DSSE scheme that supports effective dynamic updates.The program also introduces a delete array and its corresponding quick reference table to save deleted file information.
The establishment of an inverted index is shown in Figure 3.
The DSSE scheme builds an index model as shown in Figure 4.
The DSSE scheme is very efficient, but it does not support multiple users.The Mu-MQ solution supports multiple users, but it is not efficient.Therefore, the MU-DSSE scheme proposed in this section uses the idea of the Mu-PQ scheme to extend the DSSE scheme to a multi-user scheme.In this solution, a trusted group is formed for users who have access to data, and each user in this group has a key for searching.Using this keyword, you can effectively implement index search, index addition, and index deletion.The user can revoke the user or add the user to a trusted group at any time.
The scheme proposed in this section has the following three types of roles: user groups, authorized institutions, and cloud search servers, as shown in Figure 5.
The program is described as follows.Because the research focus of this article is to search and encrypt keywords and encrypt files.In order to use any encryption method, it is not mentioned in the plan.
1.The initialization algorithm is executed by the authorized agency, and the authorized agency generates a bilinear group and combines two random vector machines Save a, b, and c as the system key, as shown in equation 22.
2. The algorithm is executed by an authorized institution, and the authorized institution distributes keys for users in the user group.Then calculate according to formula 23 3. The algorithm is executed by the user.The user uses the private key to generate the search credential according to formula 24, and hand the search credential to the server.This section compares the proposed scheme with the classic multi-user KSE scheme and explains the advantages of this scheme.Table 2 mainly compares search efficiency, system model, whether it supports dynamic update, whether it supports multi-user update, and whether it supports flexible exit.
Next, perform the efficiency through experimental analysis. (
1) O(1) algorithm efficiency
Complexity algorithm just keep the number of pairwise operations or exponential operations, and its efficiency is shown in Table 3. (2) Buildindex algorithm When constructing the index, the algorithm needs to interact with the server, so it is divided into an interactive phase and a local execution phase.As shown in the step3 bar graph in Figure 6.The time and keywords used in the three steps of the interactive phase are roughly proportional to the total number of file IDs.The time spent in the three stages gradually decreases.
The local execution stage is the stage where the client builds and encrypts the index.The time consumed in this stage is mainly related to the length of the SearchArray and DeleteArray constructed.
(3) Add operation
In the "Add Algorithm Efficiency Test", let the server save 100,000 files, each file contains 5 keywords, of which files and keywords are randomly generated test data.The test result is shown in Figure 7.The X axis is the number of file index keywords added, and the Y axis is the algorithm execution time.According to the experimental results, the time complexity of the algorithm is O, and it can be carried out relatively quickly.According to the specific form of the algorithm, the operation of this algorithm is similar to the Delete algorithm.
Because the delete algorithm needs to decrypt larger data items, it is faster than the delete algorithm.This section compares the two schemes of MU-DSSE and D-ATTR-PEKS, and the comparison results are shown in Table 4.
Research on the main overview of current cloud computing and big data
Big data refers to data that is difficult to process by conventional software and technology in a short time, so new processing methods and technologies are needed to process this part of the data.Among them, diversification refers to the diversification of information data formats and data types in the context of big data; and rapidity refers to the fact that in the context of big data, the transmission efficiency of information is higher, the processing speed is faster, and the information can be processed in a timely manner.Data is updated; while data review refers to the existence of some loopholes in the computer; the large amount of data refers to the large amount of big data computer information data processing, and it is also showing a trend of increasing day by day.
Both cloud computing and big data analysis need to be paid.Users can pay related fees according to their own needs, so as to obtain the resources actually needed.Data analysis is an important part of the big data processing process.The data is obtained and integrated and processed through related methods.In this process, the different values of the data are reflected.The combination of cloud computing network technology and big data can produce wonderful collisions: (1) In the cloud computing environment, different network users can obtain related resources according to their needs, which greatly extends the width of data information, and can also pass data Information access to network resources; (2) Improving the refinement of data analysis can deeply dig out the value of data, and it can also improve the application capabilities of software through cloud computing-based data analysis, thereby reducing the cost of data analysis.The advantages of combining data and cloud computing.
First of all, it is necessary to improve the level of data processing capabilities, which can help the system reflect the actual situation more objectively and comprehensively; on the other hand, it can also provide a theoretical basis for decision makers.In addition, enhancing data processing capabilities can also dig deeper into the value of data and make it more costeffective.Relevant departments and practitioners can dig deeper into the essence of data through research and data, and realize the sublimation of perceptual understanding of data.
Analysis of the basic processing flow of big data
Traditional data processing and input methods cannot meet the needs of massive data processing.If you want to process massive data, you must process and analyze the data in a short time, which requires more advanced information processing technology.
Big data processing is generally divided into four stages, which are as follows:(1) Data collection stage: With the increase in the number of Internet users, the amount of data inside the Internet has also begun to surge.In the face of such a large and complex data resource, how to collect data efficiently has become the key to big data processing.(2)Data processing and integration stage: The types involved in the data processing stage are more complex, and there are many redundant data that need to be deleted.Finally, the data with different formats is converted into a unified data format, so that the data processing can be more It's convenient and fast, and the most common processing method is a filter.(3)Data analysis stage: After the data is processed with a unified data structure, further analysis is required, and applications are classified according to the value of the data, and the data is processed centrally through various tools.At present, in the process of data analysis, there are already many products and software that specialize in big data analysis, which is of great help to the improvement of efficiency.(4)Data interpretation: Through this technology, the value of data and analysis results can be fully displayed to users, thereby improving the efficiency of users' application to the world and expanding the use value of data.
Analysis of the advantages and disadvantages of big data
The most prominent advantage of big data analysis is that it can visualize digitized information.In addition, big data data mining algorithms can enable relevant practitioners to mine the internal value of the data, thereby improving the cost-effectiveness of the data.In addition, big data analysis can also be applied to various fields for data prediction and data analysis.Data prediction technology is generally applied to fusion modeling technology, and new data fusion is carried out on the big data model.
Due to the increase in employees, the spread of data on social media has become more and more widespread.In this case, people's privacy is often leaked.Moreover, in a large amount of data, it is inevitable that some false or harmful information will be mixed, which not only affects the user's experience, but also triggers a series of uncontrollable events.Cloud computing network technology has four outstanding advantages, the specific analysis is as follows(1) Reduce computer costs;(2) Improve performance;(3) Almost unlimited data storage capacity;(4) Higher data storage security.
Conclusion
The rapid development of cloud technology has opened up a new path for data processing and storage.Due to its convenience and high efficiency, more and more enterprises and individual users have begun to store data in the cloud space.For some private data, most users choose to use encryption to ensure data security, but the encrypted data often fails to reflect the structural and semantic characteristics, which makes the encrypted documents unable to be retrieved by users in the cloud space.Keyword searchable encryption is the key technology to solve the above problems.It is a special encryption technology that can solve the retrieval problem after data encryption.The use of this technology allows users to find encrypted documents by searching for keywords, and ensures that intruders cannot obtain users' private information through keyword ciphertext searches or search credentials.This article has completed the following tasks by discussing the KSE program.(1) The existing scheme has been improved.The improved scheme can obtain the server through self-searching, so that the KSE scheme can resist the intrusion of bad external information, and does not need to use a secure channel.It can also be encrypted by combining attributes.Allows users to access the system through a variety of schemes.This solution has passed the four aspects of safety, correctness, functionality and performance tests, which fully proved the advantages and feasibility of the improvement; (2) Designed and constructed a cloud video sharing system, which combines the MU-DSSE solution it is applied to the storage scene of cloud video.With the development of information technology, the application of big data is becoming more and more extensive, which makes the processing technology of computer information gradually develop in the direction of informationization and large-scale.Therefore, more advanced and powerful technologies should be adopted to strengthen the computing power and storage capacity of the computer, so that the performance of the computer can keep up with the requirements of the development of the times.In the era of big data, the continuous increase of information data has brought certain difficulties to computer information processing.Therefore, it should be used for innovation, active improvement, and continuous enhancement of
( 6 )
dTest(C,T)→b input ciphertext C and search credentials T w
T w = {uid, T = h(w)K uid } (24) 4. The algorithm is executed by the cloud search server.The server obtains the user search credential T W , and finds the corresponding PK uid according to the uid.Calculate according to formula 25 and 26 respectively F(w) + e(AK uid , T w ) (25) G(w) + e(BK uid , T w ) (26)
Table 2 .
Comparison of scheme functions.
Table 4 .
Comparison of schemes. | 7,578.2 | 2023-09-11T00:00:00.000 | [
"Computer Science"
] |
First passage percolation on nilpotent Cayley graphs and beyond
Our main result is an extension of Pansu's theorem to random metrics, where the edges of the Cayley are i.i.d. random variable with some finite exponential moment. Based on a previous work by the second author, the proof relies on Talagrand's concentration inequality, and on Pansu's theorem. Adapting a well-known argument for Z^d, we prove a sublinear estimate on the variance for virtually nilpotent groups which are not virtually isomorphic to Z. We further discuss the asymptotic cones of first-passage percolation on general infinite connected graphs: we prove that the asymptotic cones are a.e. deterministic if and only the volume growth is subexponential.
Introduction
First passage percolation is a model of random perturbation of a given geometry. In this paper, we shall restrict to the simplest model, where random i.i.d lengths are assigned to the edges of a fixed graph. We refer to [GK12,Ke86] for background and references. A fundamental result (the shape theorem) states that the random metric on Euclidean lattices when rescaled by 1/n, almost surely converges to a deterministic invariant metric on the Euclidean space [CD81,Ke86]. Underling this theorem is the simple fact that the graph metric on the Euclidean grid, when rescaled, converges to the metric associated to the ℓ 1 -norm on the Euclidean space. In the world of Cayley graphs, a version of this last fact holds and characterizes polynomial growth: by a theorem of Gromov [Gr81], groups of polynomial growth are virtually nilpotent, and by a theorem of Pansu [Pa], the rescaled sequence converges in the pointed Gromov-Hausdorff topology to a simply connected nilpotent Lie group equipped with some left-invariant Carnot-Caratheodory metric. It is therefore natural to ask if when assigning random i.i.d. lengths to Cayley graph of polynomial growth, the rescaled metric almost surely converge to a deterministic metric on the Lie group. Establishing this was the original goal of this note. As we will see below, we end up getting a very general statement (though for a weaker notion of convergence) for first-passage percolation (FPP for short) on general graphs with bounded degree.
Before stating our main results for nilpotent groups, let us illustrate it in a concrete case. The notion of (pointed) Gromov-Hausdorff convergence will be informally recalled along the way (see [BBI01] for a more thorough introduction to this notion).
1.1. A motivating example: the Heisenberg group. Recall that the real Heisenberg group H(R) is defined as the matrix group and that the discrete Heisenberg H(Z) sits inside H(R) as the cocompact discrete subgroup consisting of unipotent matrices with integral coefficients. We equip the group H(Z) with the word metric associated with the finite generat- By Pansu's theorem [Pa], the sequence of pointed metric spaces (H(Z), e, d S /n) converges in the Gromov-Hausdorff sense to H(R) equipped with the (unique up to isometry) Carnot-Caratheory metric which projects to the ℓ 1 -metric on R 2 . More explicitly, consider the one-parameter group (δ t ) t∈R * + of automorphisms of H(R) defined as follows It is a standard fact that given a norm · on R 2 , there exists a unique leftinvariant Carnot-Caratheodory d cc metric on H(R) that projects to · and that is scaled by δ t , i.e. such that d cc (e, δ t (g)) = td cc (e, g) for all t ∈ R * + and all g ∈ H(R). Such an automorphism δ t (for t > 1) is called a dilation. Now, consider the sequence of embeddings of δ 1/n • i : H(Z) → H(R), where i is the standard embedding described above. This can be interpreted as a sequence of maps φ n of pointed metric spaces (H(Z), d S /n, e) to (H(R), d cc , e). Pansu's Theorem says that φ n is a sequence of Gromov-Hausdorff approximations: i.e. for all ε > 0, there exists n 0 such that for all n ≥ n 0 , • for all g, h ∈ B S (e, n/ε), In particular, one deduces the so-called "asymptotic shape theorem" saying that for every r > 0 the sequence K n = φ n (B S (e, rn)) of compact subsets of (H(R), d cc ) converges for the Hausdorff metric to B cc (e, r).
Now assume that the length of each edge f in the Cayley graph (H(Z), S) is given by an i.i.d. random variable which -says equals 1 or 2 with equal probability-. This provides a family of metrics on H(Z) indexed by a probability space, namely ({1, 2} H(Z) S , P) where P is the product measure. What can be said of the sequence of pointed metric spaces (G, e, d ω /n) for a "typical ω"?
1.2. An asymptotic shape theorem for Heisenberg's group. Before stating our precise result, let us describe our general set up. We consider a connected nonoriented graph X, whose set of vertices (resp. edges) is denoted by V (resp. E). For every function ω : E → (0, ∞), we equip V with the weighted graph metric d ω , where each edge e has weight ω(e). In other words, for Observe that the simplicial metric on V corresponds to the case where ω is constant equal to 1, we shall simply denote it by d. We will now consider a probability measure on the set of all weight functions ω. We let ν be a probability measure supported on [0, ∞). Our model consists in choosing independently at random the weights ω(e) according to ν. More formally, we equip the space Ω = [0, ∞) E with the product probability that we denote by P.
A central result in first passage percolation is the following Gaussian concentration inequality due to Talagrand. Theorem 1.1. [Ta95, Proposition 8.3]). Suppose that ω(e) has an exponential moment: i.e. there exists c > 0 such that E exp(cω(e)) < ∞. Then there exists C 1 and C 2 such that for every graph X = (V, E), for every pair of vertices x, y, and for every 0 ≤ u ≤ d(x, y), , u .
In the sequel we shall make the following assumptions on ν.
• (A2) We also suppose that there exists a > 0 such thatd(x, y) ≥ ad(x, y) for all x, y ∈ V.
It turns out that the second condition is fulfilled provided that ν({0}) < 1/k, where k is an upper bound on the degree of the graph [Te14, Corollary A2]. On the other hand, one hasd(x, y) ≤ bd(x, y), where b = E(ω e ). It follows that under condition (A2), d andd are bi-Lipschitz equivalent.
In what follows, we consider as a subset of H(R) equipped with the leftinvariant Carnot-Caratheodory metric d cc obtained as the limit of (G, d S , e) in Pansu's theorem. We let B cc (g, r) denote the ball of radius r centred at g for this metric. Recall that the Hausdorff distance between two compact subsets A and B of H(R) is defined as where [A] r denotes the set of points of H(R) at distance at most r from A. Observe that here, [A] r = AB cc (0, r), where we adopt the notation The following theorem is the analogue of the shape theorem in Z d [CD81,Ke86]. Our main result is an extension of this theorem to FPP.
Theorem 1.4. (Asymptotic shape theorem for nilpotent groups) Assume that G is virtually nilpotent and we let (N R , d cc ) be the limit of (G, d/n, 1 G ) as in Theorem 1.3. We assume (A 1 ) and (A 2 ) are satisfied. There exists a left-invariant Carnot-Caratheodory metric d ′ cc on N R , which is bi-Lipschitz equivalent to d cc such that for a.e. ω ∈ Ω, (G, d ω /n, 1 G ) converges in the pointed Gromov-Hausdorff topology to As in the case of the Heisenberg group discussed above, one can explicitly produce (both for Theorem 1.3 and Theorem 1.4) a sequence of Gromov-Hausdorff approximations involving a 1-parameter subgroup of dilation of N R , and accordingly deduce an asymptotic shape theorem of the same flavour. The precise statements can be found in [Pa] or [B] in the deterministic setting, and it is straightforward to deduce from our proof of Theorem 1.4 the corresponding statements for first passage percolation.
The proof of Theorem 1.4 goes in two steps: first we use Theorem 1.1 to show that the identity (G, d ω /n, 1 G ) → (G,d/n, 1 G ), whered = Ed ω is almost surely a sequence of Gromov-Hausdorff approximation. This step is completely general: the only geometric property that is used is the fact that the volume of balls in (G, S) grows subexponentially (see §3). The second step consists in showing thatd is sufficiently close to being geodesic to apply Pansu's theorem to the sequence (G,d/n, 1 G ).
We point out that the proof of Theorem 1.4 (except for Pansu's theorem) is essentially contained in [Te14].
1.4. Asymptotic cone of FPP on graphs with bounded degree. The first condition to obtain a limit shape theorem is to have relative compactness for the Gromov-Hausdorff topology, which restricts our investigations to graphs with polynomial growth. In order to treat more general situations, one needs the notion of asymptotic cone, which is some way to force the scaling limit to exist (using some non-principal ultrafilter). These notions are recalled in §3. One can then prove a very general result which in some sense is a far-reaching generalization of the phenomenon observed in Theorem 1.4. Theorem 1.5. Let X = (V, E) be a graph with degree at most k ∈ N, let o n be a sequence of vertices, r n ∈ N be an increasing sequence, and let η be a non-principal ultrafilter. We assume that ν is supported on [a, b], with 0 < a < b < ∞ < and that ν({a}) < 1/k. Then "the asymptotic cone is almost surely deterministic", i.e. for a.e. ω, if and only if for every ε > 0, Saying that the asymptotic cone is almost surely deterministic amounts to saying that the fluctuations of the metric in the ball of radius r are almost surely "sublinear", i.e. in o(r). For those who do not like ultrafilters and asymptotic cones, we recommend to read the statements of Propositions 1.5 and 3.5 which are written in terms of fluctuations.
Theorem 1.5 is the combination of two independent statements: one dealing with the subexponential growth case, and one with the exponential growth case (see Remark 3.8). The first statement (Corollary 3.2) is a consequence of Talagrand's Theorem 1.1, while the second one (Corollary 3.6) is completely elementary. The conclusion of Corollary 3.6 is actually stronger than the statement of Theorem 1.5: roughly speaking it says that the ω-distance in the ball B(o n , r n ) a.s. admits fluctuations of size of the order of r n about the average distance. Note that we do not know whether this remains true for the distance to the origin.
1.5. Sublinear upper bound on the variance. A straightforward and wellknown consequence of Theorem 1.1 is a linear bound on the variance var(d ω (x, y)) = O(d(x, y)) valid for any graph, and sharp for Z (Kesten first proved it for FPP on Z d using martingales [Ke93]). In [BKS03], the authors manage to improve this linear bound on Z d , for d ≥ 2: .
To be more precise, they prove it under the assumption that ν({a}) = ν({b}) = 1/2. However, in [BR07,Theorem 4.4], the same result is proved under much more general assumptions on ν (including e.g. exponential laws). In a subsequent paper, these authors prove a concentration inequality as well [BR08, Theorem 5.4]. All these results rely on the same geometric trick from [BKS03]. Therefore they can all be generalized to the setting of Theorem 1.6 below. However, to simplify the exposition and avoid useless repetitions, we shall only focus on the original statement of [BKS03].
Theorem 1.6. Assume that ν({a}) = ν({b}) = 1/2 (or more generally the assumptions of [BR07, Theorem 4.4]) and consider FPP on some Cayley graph (G, S). Assume that G has a finite index subgroup G ′ < G whose center Z(G ′ ) satisfies the following property: there exists δ > 1 and c > 0 such that for all n Then there exists C > 0 such that for all x, y ∈ G, one has d(x, y)) .
Let us examine the case of the Heisenberg group: its center is isomorphic to the cyclic subgroup generated by the matrix Note that [a k , b n ] = a −k b −n a k b n = c nk , from which one easily deduces that |Z(H(Z))| ∩ B S (e, n)| ≥ αn 2 for some constant α > 0. Therefore the previous theorem applies to H(Z). More generally, it is well-known (see e.g. [Gui73]) that non-virtually abelian nilpotent groups satisfy (1.1) with some δ ≥ 2. So Theorem 1.6 applies to Cayley graphs of virtually nilpotent groups which are not virtually isomorphic to Z.
Asymptotic shape theorem for FPP nilpotent groups
Theorem 1.4 relies on the following proposition (which is [Te14, Proposition 1.2]).
Proposition 2.1. (Fluctuations about the average distance)
We let X : (V, E) be a graph, and we assume that there exists C > 0 and d > 0 for all o ∈ V and all r > 0, |B(o, r)| ≤ Cr d . We assume (A 1 ) and (A 2 ) are satisfied. Then there exists C ′ > 0 such that for a.e. ω, there exists r 0 such that for r ≥ n 0 , We deduce from the previous proposition that there exists C ′ such that for a.e. ω there exists n 0 such that for all n ≥ n 0 , one has Corollary 2.2. Under the assumptions of Proposition 2.1, there exists a measurable subset of full measure Ω ′ ⊂ Ω such that for all ε > 0, and all ω ∈ Ω ′ , there exists r 0 such that one hasB for all r ≥ r 0 , and all o ∈ V.
In order to prove Theorem 1.4, it is therefore enough to show the following result. This relies on the following proposition 1 .
Theorem 2.3 is now a direct consequence of Pansu's theorem [Pa], which holds for inner left-invariant metrics on G.
3. Asymptotic cones of FPP on graphs with bounded degree 3.1. Graphs with subexponential growth: "fluctuations vanish in the asymptotic cone". We have the following consequence of Talagrand's concentration inequality. There exists a measurable subset of full measure Ω ′ ⊂ Ω such that for all ε > 0, and all ω ∈ Ω ′ , there exists n 0 = n 0 (ω, ε) such that for all n ≥ n 0 , for all x, y ∈ B(o n , r n /ε), one has |d ω (x, y) −d(x, y)| ≤ εr n .
Let 0 < ε ≤ 1. Applying Talagrand's theorem, we obtain that for all pairs of vertices x, y ∈ B(o, r n /ε), The actual statement of [Te14, Proposition 1.5] is that the metric is strongly asymptotically geodesic. 2 This property is called asymptotic geodesicity in [B].
Let ε k be a sequence converging to 0. For all n 0 , k ∈ N, we let Ω n 0 ,k be the event that for all n ≥ n 0 , and for all x, y ∈ B(o n , r n /ε k ) such that d(x, y) ≥ ε k r n one has Clearly, it is enough to show that Ω ′ = k n 0 Ω n 0 ,k , has full measure. This is equivalent to showing that for every k ∈ N, n 0 Ω n 0 ,k has full measure. For this, it is enough to show that for every k ∈ N, P(Ω n 0 ,k ) → 1 as n 0 → ∞. Observe that 1 − P(Ω n 0 ,k ) is the probability that there exists n ≥ n 0 such that (3.3) is not satisfied for some x, y ∈ B(e, r n /ε k ) such that d(x, y) ≥ ε k n. It follows from Lemma 3.4 that for n 0 = n 0 (k) large enough, which tends to zero as n 0 → ∞ by (3.2).
Let us mention an immediate consequence of Proposition 3.1. We let η be a non-principal ultrafilter on N. Recall that given a sequence of pointed metric spaces (X n , d n , o n ), one can define its ultra-limit according to η as follows: where (x n ) ∼ (y n ) if and only if lim η d n (x n , y n ) = 0. The distance on X η is defined by the formula d η ((x n ), (y n )) := lim η d(x n , y n ). Now, given a fixed metric space X, a sequence of points o n , a sequence r n → ∞, and a non-principal ultrafilter η, we call asymptotic cone (with respect to these data) the ultralimit lim η (X, d/r n , o n ).
Corollary 3.2. Under the assumption of Proposition 3.1, there exists a measurable
subset Ω ′ of full measure such that for all ω ∈ Ω ′ and all non-principal ultrafilter η, In other words, "the asymptotic cone is almost surely deterministic".
As a special case of the previous corollary, we deduce that the asymptotic cone of FPP on a Cayley graph with subexponential growth is almost surely deterministic.
It is important to make a clear distinction between the strong statement of Corollary 3.2, and the following much weaker one, which is true on any graph. Proof. This follows from the following lemma, which is an immediate consequence of Talagrand's inequality.
Lemma 3.4. Let X = (V, E) be a graph, and let o n be a sequence of vertices.
We assume (A 1 ) and (A 2 ) are satisfied. There exists constants C ′ 1 , C ′ 2 such that the following holds. Let r n ≥ 0 be a increasing sequence of integers, and let ε > 0. Then for all x n , y n ∈ B(o n , r n /ε). the probability that |d ω (x n , y n ) −d(x n , y n )| ≤ εr n , is at least 1 − C ′ 1 exp −C ′ 2 ε 3 r n . 3.2. Graphs with exponential growth: "fluctuations remain non-trivial in the asymptotic cone". In this subsection, we shall make the assumption that the minimal interval containing the support of ν is of the form [a, b] with 0 ≤ a < b < ∞.
Note that for first-passage percolation on the r-regular tree for r ≥ 3, it is easy to see that the asymptotic cone of FPP is not deterministic. We can use the fact that the random distance between two vertices x, y in the tree is only determined by the edges along the unique geodesic between them: this distance is therefore the sum of n := d(x, y) independent random variables. The average distanced(x, y) is equal to cd(x, y), where c ∈ (a, b) is the expected length of a given edge. The probability that d ω (x, y) is less than -say (a + c)d(x, y)/2 (resp. more than (c + b)d(x, y)/2) decays (at most) exponentially with n. On the other hand, there are at least exponentially many pairs of disjoint geodesics of length n in a ball of radius kn, for k ≥ 2 (the exponential exponent can be made as large as we want by increasing k): for instance, for all x in the sphere of radius (k − 1)n, pick a geodesic joining x to a point of the sphere of radius kn. It follows for a.e. ω, one can find in the asymptotic cone a pair of distinct points whose ω-distance is strictly less (or strictly larger) than the average distance.
The same argument adapts to non-elementary hyperbolic groups. To generalize the previous argument, one uses the fact that there exists C > 0 such that for all ω and for every pair of points x, y, there is a geodesic (say for the word metric) γ between x and y whose C-neighborhood contains any d ω -geodesic between x and y. To conclude that there exist fluctuations of linear size (both above and below the average distance), one needs to produce exponentially many "independent" pairs of points at distance n in a ball of radius n: this follows for instance by considering a quasi-isometrically embedded 3-regular tree.
For general graphs of exponential growth (even Cayley graphs), we do not know whether it is possible to exhibit fluctuations above the average distance in the asymptotic cone. However, it is possible to show that it always has fluctuations below the average distance: More precisely, the following proposition says that if the growth is exponential, then a.s. one can find in the asymptotic cone pairs of distinct points whose ω-distance are "as close as possible to the minimal possible distance a(dx, y)". Provided that the average distance is bounded away from this minimal distance (see Lemma 3.7), this implies that FPP admits "random fluctuations of linear size", which are visible in the asymptotic cone.
Before proving this proposition, let us restate it in terms of asymptotic cones.
Corollary 3.6. Let X = (V, E) be a (not necessarily connected) graph with degree at most q, let o n be a sequence of vertices and let η be a non-principal ultrafilter. Assume that there exists an increasing sequence r n ∈ N such that log |B(o n , r n )| ≥ cr n for some constant c > 0. Then there exists a measurable subset Ω" of full measure with the following properties. For all ω ∈ Ω", for all ε > 0, there exists α > 0 and x n , y n ∈ V such that d(o n , x n ) = O(r n ) and d(o n , y n ) = O(r n ) and such that, Moreover, if ν({a}) > 0, then one can take ε = 0.
Proof. Note that since the degree of X is bounded, there exists C such that Let λ = c/2C, so that |B(o n , λr n |) ≤ e cr n /2 . We now consider a subset X n of B(o n , r n ) whose points are pairwise at distance at least (c/4C)r n apart and which is maximal for this property. It follows that from which we deduce that |B(o n , r n )| ≤ |X n |e cr n /2 .
Thus we deduce that |X n | ≥ e cr n /2 .
We let λ > 0 to be determined later and let k n = [λ(c/8C)r n ]. Observe that the balls B(x, k n ), for x ∈ X n are pairwise disjoint. So one can pick for every x ∈ X n a point y x at distance k n from x, and a geodesic γ x between them. The probability that all edges of γ x have ω-length at most (1+ε)a is at least ν([a, a(1+ε)]) k n . Since the paths γ x are disjoint, these events are independent, so that the probability that one of them has ω-length at most (a + ε)k n is at least Recall that given two sequences such that u n → 0 and v n → ∞, one has (1 − u n ) v n ≤ exp(−u n v n ). On the other hand, by taking λ small enough (depending on ε, unless ν({a}) > 0), one can ensure that e cr n /2 ν([a, a(1 + ε)]) λ(c/8C)r n ≥ exp(c ′ n) for some c ′ > 0. Therefore for this choice of λ, the above probability tends to 1 as n tends to infinity very quickly (in particular the probability of the complement event is summable). This ensures the existence of a measurable subset of full measure Ω" such that for all ω ∈ Ω", there is a sequence x n ∈ X n such that for n large enough, d ω (y x n , x n ) ≤ (a + ε)k ′ n = a(1 + ε)d(y x n , x n ). This proves the first part of the proposition with y n = y x n .
To finish the proof of Theorem 1.5, we need the following lemma.
Lemma 3.7. [Te14, Lemma A.1] Let X = (V, E) be a graph of degree ≤ q. Assume that ν is supported on the interval [a, ∞) and that ν({a}) < 1/q. Then there exists a ′ > a and r 0 ≥ 0, such thatd(x, y) ≥ a ′ d(x, y) for all x, y ∈ V such that d(x, y) ≥ r 0 .
Remark 3.8. To conclude the proof of Theorem 1.5, let us remark that in the proof of Corollary 3.2 (resp. in Corollary 3.6) the condition log |B(o n , r n )| = o(r n ) (resp. log |B(o n , r n )| ≥ cr n ) only needs to hold η-almost surely.
Upper bound on the variance
The proof of Theorem 1.6 is a simple generalization of the proof of [BKS03, Theorem 1] (which deals with the case of Z d , d ≥ 2). We shall sketch its proof, following the same order as in [BKS03], but only providing justifications when the argument needs to be adapted to our more general setting. To simplify the exposition, we shall assume that δ ≥ 2. In this section, we will denote 1 G for the neutral element of G, keeping the letter e for the edges. Remember, since this will play a crucial role in this proof that the graph structure on (G, S) is defined by saying that two elements (i.e. vertices) g and g ′ and joined by an edge if there exists s ∈ S such that g ′ = gs ±1 . Hence, the action by left-translations of G on itself preserves the graph structure and thus the metric.
Following [BKS03], let us fix g ∈ G, and consider the random variable f (ω) := |g| ω , where |g| ω denotes the ω-distance from the neutral element to g. We shall also denote |g| = d(1 G , g), where d is the word metric on G. For every ω, we pick some ω-geodesic γ from 1 G to g. For every ω and every edge e ∈ E we denote σ e ω the configuration which is different from ω only in the e-coordinate. We start remarking that 1,1 , . . . , x 1,m ) .
We let C ≥ 1 be large enough so that |Z(G ′ ) ∩ B S (1 G , Cm)| ≥ m 2 , and we pick some injective map from j : {1, . . . , The first important estimate from [BKS03] is If z commutes with g, as d(1 G , z) = d(g, gz) ≤ Cm, we deduce by triangular inequality that | f −f | ≤ 2bCm, which implies (4.2). Otherwise, one needs that gz = z ′ g, for some z ′ ∈ Z(G ′ ) such that d S (e, z ′ ) = O(m). This is guarantied by the following lemma, after noticing that up to replacing G ′ with the intersection of all its conjugates, we can assume that G ′ is a characteristic subgroup of G, whose center is therefore normal in G.
Lemma 4.1. Assume G ′ is characteristic. There exists some constant C, such that for all g ∈ G and z ∈ Z(G ′ ), d S (e, gzg −1 ) ≤ Cd S (e, z).
Proof. Note that the action by conjugation of G on Z(G ′ ) factors through G/G ′ which is finite. Let F ⊂ G be a set of representatives of G/G ′ , and let C = max g∈F,s∈S d S (e, gsg −1 ). Let z ∈ Z(G ′ ) of length n, and let z = s 1 . . . , s n , where s i ∈ S. Given g ∈ G, there exists h ∈ F such that gzg −1 = hzh −1 . Thus we have so the lemma follows by triangular inequality. Define Then one needs to show that (1 G , g)).
The rest of the proof is identical to [BKS03] so we will not repeat it. Note that if the pair (x, ω) ∈Ω satisfiesf (x, σ e ω) >f (x, ω), then e must belong to every geodesic between z and zg, so in particular it belongs to zγ. So we get Let Q be the set of edges e ′ such that P(z −1 e = e ′ ) > 0.
Remarks and questions
5.1. More general distributions. It would be interesting to investigate whether our results survive to non-trivial correlations between edges lengths. Note that in some sense, Talagrand's exponential concentration estimate is far too strong for Theorem 1.4: actually a polynomial decay with a large exponent would be enough to beat the (polynomial) growth rate of the group. This suggests that one should be able to use a weaker moment condition, and possibly allowing some weak correlations. For groups, one can consider a different type of generalization: given an ergodic G-probability space (Ω, P), an invariant random metric (IRM) on G is a measurable map G × G × Ω → R + , (g, h, ω) → d ω (g, h), such that for a.e. ω ∈ Ω, d ω (·, ·) is a distance on G, and that satisfies the equivariance condition: for a.e. ω, and all g, h 1 , h 2 ∈ G, d gω (gh 1 , gh 2 ) = d ω (h 1 , h 2 ).
Clearly FPP is a special case of IRM, where the space Ω is [a, b] E equipped with the product probability. Observe that in this case, the action of G on Ω, induced by its action of E, is ergodic (actually even mixing). One may wonder under what conditions on an IRM is the asymptotic cone of (G, d ω , e) almost surely deterministic. In the special case of virtually nilpotent groups, one may ask whether (G, d ω , e) converges in the pointed Gromov-Hausdorff topology to a connected Lie group equipped with an invariant Carnot-Caratheodory metric. Classical proofs of the limit shape theorem for Z d are based on the subadditive ergodic theorem, which allows to treat very general IRM (see [Bj10] for the most general known statement). Unfortunately, we were not able to exploit the subadditive ergodic theorem for non-virtually abelian nilpotent groups: this only gives us that distances along certain "horizontal" directions are asymptotically deterministic, but for instance in the case of Heisenberg, it is not clear under what conditions distances in the direction of the center do not have large fluctuations.
Let us end with a last remark. Recall that the proof of Theorem 1.4 splits into two independent parts: one consists in proving a concentration phenomenon, namely that the identity map (G, d ω /n, e) → (G,d/n, e) induces a sequence of Gromov-Hausdorff approximations (recall thatd = Ed ω ). This might remain true under very general assumptions on d ω , and in particular it may not require d ω to be geodesic, not even in a weak sense. This contrasts with the second step, consisting in proving that (G,d/n, e) converges, which does required to be asymptotically geodesic: indeed, conversely, if (G,d/n, e) converges to some geodesic metric space, thend must be asymptotically geodesic. On the other hand one can exhibit invariant metrics on the Heisenberg group which are not asymptotically geodesic and yet quasi-isometric to the word metric. Moreover such a metric d can be chosen so that (G, d/n, e) does not converge at all [C11, Remark A.6.]. 5.2. Sublinear variance. The proof of the sublinear estimate on the variance (Theorem 1.6) uses the fact that the group has a large center. By contrast, we know that for Z, or more generally on a tree, the variance grows linearly (this can easily be extended to Gromov-hyperbolic graphs). We suspect that -at least in the context of Cayley graph-the fact that the variance is sublinear might be related to the fact that no asymptotic cone has cut points (a cut point has the property that when we remove it, the space becomes disconnected). We propose the following more modest conjecture Conjecture 5.1. Suppose G is the direct product of two infinite finitely generated groups, then (1.2) is satisfied for all Cayley graphs of G.
A particularly interesting case is the direct product of the 3-regular tree T with Z: in this case, [BM13] have managed to prove that E(|d ω −d|) is tight in the Z-direction. There is some reason to believe that in the T-direction the variance should behave as for Z 2 (since geodesics are likely to remain at bounded distance from the direct product of a geodesic in T times Z). Overall, the variance should be even smaller for T × Z than for Z 2 , where it is classically conjectured to be of the order of n 2/3 (we refer to [BKS03] and [GK12] for a more detailed discussion concerning Z 2 ). Another interesting example is the product of two 3-regular trees, for which no sublinear estimate is known at the moment.
5.3. RWRE on virtually nilpotent Cayley graphs. The FPP shape theorem and the rate of convergence are statements regarding large scale metric homogenization of local random metric perturbations. Similarly to the path we took here for FPP, it is of interest to consider the random walk, heat kernel and Green functions homogenization in the context of virtually nilpotent Cayley graphs, extending the work from lattices in Euclidean spaces, studied in PDE under the name of homogenization and in probability theory under the name RWRE (random walk in random environment) see e.g. [Ba]. | 8,111.6 | 2014-10-13T00:00:00.000 | [
"Mathematics"
] |
Simulation Research of Magnetically-coupled Resonant Wireless Power Transfer System with Single Intermediate Coil Resonator Based on S Parameters Using ANSYS
ANSYS can be a powerful tool to simulate the process of energy exchange in magnetically-coupled resonant wireless power transfer system. In this work, the MCR-WPT system with single intermediate coil resonator is simulated and researched based on scattering parameters using ANSYS Electromagnetics. The change rule of power transfer efficiency is reflected intuitively through the scattering parameters. A new method of calculating the coupling coefficient is proposed. A cascaded 2-port network model using scattering parameters is adopted to research the efficiency of transmission. By changing the relative position and the number of turns of the intermediate coil, we find some factors affecting the efficiency of transmission. Methods and principles of designing the MCR-WPT system with single intermediate coil resonator are obtained. And these methods have practical value with design and optimization of system efficiency.
Introduction
Wireless power transfer (WPT) has become a research interest in recent years. It was reported by Tesla a century ago [1] , and offers the promise of cutting the last wire. It has been or will be used in costly, inconvenient, dangerous and even some occasions that cannot use wired transmission, such as implantable ventricular assistant devices [2] or for acquiring power from offshore wind driven generators and outer space solar power station [3] .
In 2007, the Massachusetts Institute of Technology (MIT) has proposed a new scheme based on strongly coupled magnetic resonances, thus presenting a key breakthrough for mid-range wireless energy transmission [4] . Although the coupled mode theory can explain the process of energy exchange well [5] , it can only apply to the conditions with small perturbations, and it is too hard for common electrical engineers to understand or use. So its application has been greatly restricted; on the other hand, measuring the parameters in the coupled mode theory is very difficult. In this work, we choose the ANSYS Electromagnetics, which is one of the most powerful tools to simulate the process of energy exchange in the MRC-WPT system as well as to calculate the parameters in it, to study the WPT system with single intermediate coil resonator. From an electrical engineering perspective, we derive curves for coupling coefficient versus the gap length between the sending coil and receiving coil using ANSYS Hfss and the curve fitting tool of MATLAB, thus compare it with the result calculated by Neumann formula. Based on the above work, we research the influence of single intermediate coil with different number of turns on transmission efficiency. Paper [6] has researched single intermediate coil resonator's impacts on transmission efficiency with different positions, but it ignores the geometrical factors of the intermediate coil. Paper [7] has researched multiple intermediate resonant coils' influence on transmission efficiency, but it uses the coupled mode theory that many electrical engineers are not familiar with. By introducing the efficiency of transmission, a cascaded 2-port network model using S parameters is adopted, which covers the principle of designing the single intermediate coil resonator and also avoids the blindness in designing the intermediate coil resonator.
Simulation Model of Non-intermediate System
As shown in Fig 1, the magnetic resonance coupling wireless power transfer system with non-intermediate resonant coil is composed of power supply, excitation coil, sending coil, receiving coil, load coil the load and for each coil there is one compensating capacitor installed in series. In this paper, only the efficiency of transferring energy from sending coil (labelled Coil 1) to receiving coil (labelled Coil 2) is considered, and for sake of simplicity, we suppose the load is a constant low resistive load which has no influence on the transmission efficiency analysis because each variable capacitor can tune the system to 2.8MHz that ensures all the coils are in resonance. The coil type in this analysis is planar spiral coil, and the number of turns of both sending coil and receiving coil is 7. The material of the coil is set to copper. Then the 3D model of the two coils is built in ANSYS and Fig 2 shows the model. The skin effect and proximity effect can be ignored in calculating the inductance for the following three reasons: firstly, the frequency of the system is far lower than the very high frequency (VHF); secondly, the wire diameter is not very thick; thirdly, the wire section is a uniform circle [8] .
Calculating the Inductance and the Capacitor Value
The inductance calculation of the coil is very important in the simulation, and the empirical formula for calculating the inductance of the planar spiral coil is given by equation (1) and the substitution is in equation (2), which indicates the relationship between the inductance value and its geometric size. The unit of the coil's size is inch and the unit of L (inductance) is μH. In order to make the meaning of the formulas more clear, all the coil sizes are labelled in Fig 3, which is the cutaway view of a common four-turn coil. 2 2 In equation (1) and equation (2), W, S, DI and N represent the wire diameter, the distance between two turns, inner diameter of the coil and the number of turns of the coil, respectively. In this paper, experimental parameters W=11.8, N=7, W=0.6, S=0.31, and the unit "inch" is for all the coil sizes. The theoretical result, as obtained from equation (1) and (2), and the simulation result, as obtained from ANSYS simulation, are shown and compared in It can be seen from Fig.4 that the two results have only minor differences in calculating the inductance. But the method of simulation will be better than that using theoretical formula when the shape of the coil is irregular.
The simulation values of the Coil 1 and Coil 2 are 27.3μH and 27.1μH, respectively.
When the two coils are in resonance, the value of the compensating capacitor can be given by equation (3), where f represents the resonant frequency.
The value of the capacitor for each coil 118.348 pF and 119.22 pF. And the characteristic impedance of each coil is set to 50 ohm for better impedance matching.
Calculating the Coupling Coefficient between Resonators
The scattering parameters are useful in analyzing the forward gain of the mid-range wireless systems. As shown in Fig 5, the relationship of the incident and reflected waves can be defined by equation (4), 1 1 1 1 2 1 where a1 and a2 are the incident power waves on Port-1 and Port-2, respectively; b1 and b2 are the reflected power waves on Port-1 and Port-2, respectively [9] . The forward transmission coefficient S21 is defined as It can be seen from equation (5) that the S21 parameter can be an indicator for transmission performance of midrange wireless power systems. ANSYS Hfss(High frequency structure simulator) can calculate the S parameters easily when the boundaries and excitations are arranged properly. Fig.6 and Fig.7 show the amplitude and phase curves of S21, respectively. And the solid lines with different markers stand for the S21 parameters in different gap length between the two coils. It can be seen from Fig.6 that there are two resonant peaks when the gap between the two coils varying from 10 centimeters to 35 centimeters with the step 5 centimeters. This phenomenon is so called "frequency splitting" what has been explained in great detail in paper [10] , and two coils are said to be over-coupled. And when the gap is 40 centimeters, the two resonant peaks combine with each other and the system is said to be critical coupled.
Paper [11] has proposed a formula to calculate the coupling coefficient between resonators based on S parameters when the two resonators are over coupled, and it is given by equation (6).
where f1, f2 and f0 represent the frequency of the left peak, the right peak and the minimum point in Fig.6, respectively and f'1, f'2, and f'0 represent the frequency of the 180° jump point, the +90° point and the zerocrossing point in Fig.7, respectively. But with increasing distance, the two resonators will be critical coupled and even under coupled, thus the equation (6) will fail to finish the calculation. In this condition, a fitting expression like equation (7) is proposed. And a, b, c are deduced by the coupling coefficient (D=0.35m, 0.3m, 0.25m, 0.2m) obtained from equation (6) ( ) b f x ax c (7) Another method of calculating coupling coefficient of the two resonators is to use Neumann formula (8) and equation (9).
In equation (8), M and D represent the mutual inductance and the gap between the two coils, respectively. L1 and L2 represent the self-inductance of Coil 1 and Coil 2. Using equation (8) and equation (9) we can derive equation (10), where r is the average radius of the coils, and r=0.242m in this paper. Fig 8 shows and compares the results of coupling coefficient k versus gap length between the two coils using different methods. The above simulation and calculation show that the results obtained from equation (6) is similar to that obtained from Neumann formula when the two coils are over coupled and the results obtained from the fitting expression (7) is similar to the one obtained from Neumann formula when the two coils are critical coupled or under coupled ( > @ 0.35 , 0.8 D m m ). Paper [12] uses the Method of Moments (MoM) to calculate coupling coefficient k whose results are the same as the one obtained from Neumann formula. All the above methods are valid, and our method is maneuverable and suitable for engineering applications when the system is critical coupled or under coupled while using Neumann formula or MoM needs lots of mathematical techniques.
Eficiency Analysis of WPT
According to the microwave theory, the relationship between S parameters and efficiency of transmission is given by equation (11).
It can be seen from Fig.6 and equation (11) that efficiency of transmission will decrease rapidly when the gap length between the two coils increases. In order to solve this problem, wireless power transfer system with single intermediate coil resonator was proposed in paper [13] .
When the gap length is 100 centimeters, the efficiency of transmission is lower than 8% and the coupling coefficient k12 is so tiny that can be ignored. And in this condition, one intermediate coil resonator (labelled Coil 3) is placed in the middle of two coils.
Efficiency of transmission of the system with three coils can be deduced from a cascaded 2-port network model, what is showed in Fig 9. T parameters of the cascaded 2-port network model can be given by equation (12), ' T T T (12) where T is the parameters of the first 2-port network N1 and ' T is the parameters of the second one N2. And the relationship of S parameters and T parameters can be given by equation (13), and equation (14). 11 12 11
Transverse Offset's Impact on Efficiency of Transmission
The position of intermediate coil resonator's transverse offset is shown in Fig 10. And all the three coils have the same number of turns 7.
Axial Offset's Impact on Efficiency of Transmission
The position of intermediate coil resonator's axial offset is shown in Fig 12. And all the three coils have the same number of turns 7.
Efficiency of Intermediate Coil in Different Turns
In this simulation, the coil type of Coil 3 is the planar spiral coil and is also placed in the middle of Coil 1 and Coil 2. The number of turns varies from 5 to 9 whose inductance value has been calculated in section . With the increase in the number of turns, the inductance and radius of Coil 3 rise. As shown in Fig.14, higher efficiency of transmission can be obtained from increasing the number of turns of the intermediate coil.
However, in the actual situation, radius of intermediate coil cannot be very large. And when the radius reach to some of extent, the resistance will be very large which can suppress the transmission efficiency [14] .
CONCLUSION
The self-inductance of coils can be calculated accurately in ANSYS Electromagnetics. And the method of calculating coupling coefficient using scattering parameters is developed, and thus a fitting expression like f(x)=ax b +c is proposed to calculate the coupling coefficient. When the two coils are critical coupled or under coupled, this method is easy and useful in engineering application. Based on the above work, magnetically-coupled resonant wireless power transfer system with single intermediate coil resonator is simulated and researched using scattering parameters. A cascaded 2-port network model is used to explain the above system. Simulations and calculations show that transmission efficiency of the system is relative to intermediate coil's axial offset while intermediate coil's small transverse offset has minimal difference on the efficiency of transmission. When designing the system, we can call for the powerful tool--ANSYS Electromagnetics to determine the proper number of turns and radius of intermediate coil. | 3,025.4 | 2016-01-01T00:00:00.000 | [
"Physics"
] |
Molecular profiles of tumor contrast enhancement: A radiogenomic analysis in anaplastic gliomas
Abstract The presence of contrast enhancement (CE) on magnetic resonance (MR) imaging is conventionally regarded as an indicator for tumor malignancy. However, the biological behaviors and molecular mechanism of enhanced tumor are not well illustrated. The aim of this study was to investigate the molecular profiles associated with anaplastic gliomas (AGs) presenting CE on postcontrast T1‐weighted MR imaging. In this retrospective database study, RNA sequencing and MR imaging data of 91 AGs from the Cancer Genome Atlas (TCGA) and 64 from the Chinese Glioma Genome Atlas (CGGA) were collected. Gene set enrichment analysis (GSEA), significant analysis of microarray, generalized linear models, and Least absolute shrinkage and selection operator algorithm were used to explore radiogenomic and prognostic signatures of AG patients. GSEA indicated that angiogenesis and epithelial‐mesenchymal transition were significantly associated with post‐CE. Genes driving immune system response, cell proliferation, and focal adhesions were also significantly enriched. Gene ontology of 237 differential genes indicated consistent results. A 48‐gene signature for CE was identified in TCGA and validated in CGGA dataset (area under the curve = 0.9787). Furthermore, seven genes derived from the CE‐specific signature could stratify AG patients into two subgroups based on overall survival time according to corresponding risk score. Comprehensive analysis of post‐CE and genomic characteristics leads to a better understanding of radiology‐pathology correlations. Our gene signature helps interpret the occurrence of radiological traits and predict clinical outcomes. Additionally, we found nine prognostic quantitative radiomic features of CE and investigated the underlying biological processes of them.
| INTRODUCTION
Gliomas are both the most common and lethal tumors of the central nervous system. Magnetic resonance (MR) imaging, an indispensable approach to tumor diagnosis and treatment monitoring, identifies tumor-specific behaviors and malignancy; 1,2 contrast enhancement (CE) seen on MR imaging is indicative of blood-brain barrier disruption and tumor cells infiltration. Abnormalities in the focal blood-brain barrier can lead to the leakage of contrast reagents, which results in an enhancement on T1-weighted images. Moreover, CE has been positively correlated with tumor malignancy and unfavorable prognosis. Almost 90% of glioblastoma (GBMs; World Health Organization [WHO] grade IV) are reportedly enhanced after contrast administration, with a corresponding overall survival time of 14.4 months. Meanwhile, the enhancement ratio and overall survival for low-grade gliomas (WHO grade II) are only 10% and 78.1 months, respectively. 3,4 Multi-omics studies have greatly increased our insight into relationships between genetic alterations and radiographic imaging phenotypes, and a new research field named "radiogenomics" was generated. 5 A previous study revealed that 1p/19q-codeleted and CE anaplastic oligodendrogliomas present larger tumor volumes, chromosome 9p and CDKN2A loss, genomic instability, and expression of angiogenesisrelated genes. 1 Another radiogenmic study identified significant imaging correlations for six driver genes both in regions of enhancement and nonenhancing parenchyma. 6 However, an integrative radiogenomic analysis for clarifying molecular pathways corresponding to CEs in brain tumors have not been conducted yet.
In the present study, we investigated the specific genetic alterations associated with anaplastic gliomas (AGs, WHO grade III) presenting with CE on postcontrast T1weighted MR images. Unlike GBM and low-grade glioma, 62%-80% of AG patients present with CE, making them suitable subjects to explore radiogenomic associations. 1,7 Both whole transcriptome sequencing data and postcontrast T1-weighted MR images from the Cancer Genome Atlas (TCGA) were analyzed to explore differentially expressed genes and determine a CE-related signature. Data from the Chinese Glioma Genome Atlas (CGGA) were utilized to validate the derived signature diagnostically and prognostically. The prognostic value of quantitative radiomic features of CE was also preliminarily investigated in this study.
| Patients and samples
Ninety-one patients (49 men; median age, 45 years; range, 22-75 years; and 42 women, median age, 50 years; range, 22-74 years) diagnosed with AG were extracted from TCGA database (http://cancergenome.nih.gov) and comprised the training set ( Figure S1). Additionally, clinical characteristics of 64 cases (40 men; median age, 42 years; range, 20-70 years; and 24 women, median age, 44.5 years; range, 18-67 years) diagnosed with AG were obtained from the CGGA database (http://www.cgga.org.cn) and were deemed the validation set. Only those cases with both RNAsequencing data and MR imaging data were included in this retrospective study. The study was approved by our institutional review board.
| Image acquisition and evaluation
The Cancer Genome Atlas MR images of AGs were downloaded from the Cancer Imaging Archive (TCIA, http:// www.cancerimagingarchive.net). CGGA MR images of AGs were obtained from the CGGA imaging database (http://www.cgga.org.cn) administered by the Chinese Glioma Cooperation Group. MR images in CGGA patients were generally obtained with a Trio 3.0T scanner (Siemens, Erlangen, Germany). Tumor CE was defined as newly emerged, unequivocally increased signal intensity on the T1-weighted image following intravenous contrast administration when compared to noncontrast T1 images. Nonenhancement (nCE) was defined as no apparent enhancement in tumors on postcontrast T1-weighted images, compared with regular T1-weighted images ( Figure 1). The presentation of tumor CE was evaluated by two experienced
F I G U R E 1 Examples of Contrast
Enhancement and Noncontrast Enhancement Images for Analyses. CGGA, Chinese Glioma Genome Atlas; TCGA, The Cancer Genome Atlas neuroradiologists (X.C. and J.M., both with more than 15 years of neuroradiological experiences) blinded to the patients' clinical information. A third senior neuroradiologist (S.L. with more than 20 years of neuroradiological experiences) arbitrated when necessary.
| RNA sequencing and molecular analyses
Chinese Glioma Genome Atlas RNA sequencing was performed as previously described. 8 Briefly, libraries were sequenced on the Illumina HiSeq 2000 platform using the 101-bp pair-end sequencing strategy. Short sequence reads were aligned to the human reference genome (Hg19Refseq) using the Burrows-Wheeler Aligner (BWA, Version 0.6.2-r126). 9 IDH mutations and O-6methylguanine-DNA methyltransferase promoter methylation were assessed by pyrosequencing. 10 TCGA RNA sequencing data and corresponding molecular profiles 11 were obtained from TCGA database. The genes available for our genetic analysis were more than 20 000 both in the CGGA and TCGA databases.
| Image-genomic analysis
RNA sequencing data of CE and nCE patients were subjected to gene set enrichment analysis (GSEA). GSEA was performed using dedicated software (www.broadinstitute. org/gsea). Annotated gene sets were downloaded from the Molecular Signatures Database v5.1 (MSigDB) (http://www. broad.mit.edu/gsea/msigdb/). Differentially expressed genes were selected by significance analysis of microarray (SAM) conducted with the R programming language (http://cran.rproject.org), with a false discovery rate (FDR) <0.05. Heat maps of differential genes were constructed using Gene Cluster 12 and Gene Tree View software. 13 Kaplan-Meier survival analysis was applied to estimate the survival distributions. Gene Ontology (GO) analysis was performed using the online Database for Annotation, Visualization, and Integrated Discovery (DAVID, http://david.ncifcrf.gov/). 14 GO terms were visualized by EnrichmentMap 15 and AutoAnnotate 16 plugins in Cytoscape software. 17
The GLM algorithm was repeatedly applied to extract a gene signature containing those genes that best predicted CE in tumors. Using the dimensionality reduction principle, the gene with the highest P value when classifying CE and nCE AG was eliminated from the model each time, until a target number of genes were left. A series of receiver operating characteristic curves were delineated based on the screened genes. Associated genes with the maximal area under the curve (AUC) were established as the CE specific signature. The signature derived from the training set was subsequently applied to the CGGA for validation.
The prognostic values of candidate genes in patients with AG were calculated by least absolute shrinkage and selection operator (LASSO) algorithm. For preliminary analysis, patients with overall survival times less than 30 days were excluded. The selected genes were used in developing a linear combination weighted by their respective coefficients generated by Lasso-Cox model. 18 The risk score for overall survival time of each individual was calculated as follows: We next divided patients in the training dataset into high-risk and low-risk groups using the median mRNA signature risk score as the cutoff point; patients with higher risk scores were posited to have poor survival. The same coefficients and median risk score cutoff was applied to the validation cohort.
| Radiogenomic analysis of quantitative radiomic features of CE
In TCGA database, the CE mask was manually delineated by two experienced neuroradiologists on CE images using MRIcro software (http://www.mccauslandcenter.sc.edu/ crnl/mricro). The Dice coefficient was used to measure the discrepancy between tumor masks, and a senior neuroradiologist made a final decision about the tumor border when the discrepancy was >5%. 19 Fifty-five quantitative radiomic features were extracted from the CE mask using the method as previously described. 20 The features can be classified into three groups: (a) first-order statistics, which quantitatively measure the distribution of voxel intensities within the image; (b) shape-and size-based features, which reflect the shape and size of the tumor region; and (c) textual features, which can quantify intratumor heterogeneity differences.
Firstly, univariate Cox regression was performed on the radiomic features individually in order to screen prognostic features. Subsequently, Pearson correlation algorithm was used to screen genes that were associated with the selected radiomic features. The top 500 positive/negative genes that were significantly associated with each feature were subjected to gene ontology analysis to reveal the underlying biological processes involved in each feature. When the hazard ratio of a prognostic feature was larger than 1, gene ontology analysis was performed on the positive associated genes. When the hazard ratio was less than 1, gene ontology analysis was performed on the negative associated genes. Correlations between the selected prognostic features were calculated using Spearman's correlation analysis. P < 0.05 was considered statistically significant. 21 a crucial prognostic biomarker for gliomas, were more common in the nCE group in both the training (P = 0.038) and validation (P = 0.043) sets. Additionally, there was no difference about 1p/19q status between the CE and nCE groups in the training set (P > 0.05), while the frequency of 1p/19q co-deletion was higher in the nCE group in the validation set (P = 0.005). CE patients in the CGGA dataset consistently presented high rates of IDH mutation (P = 0.043) and undifferentiated MGMT promoter methylation (P = 0.451). The frequency of chromosome 1p/19q co-deletion was higher in CGGA nCE patients (P = 0.005).
| GSEA-identified gene functions associated with tumor enhancement
To characterize CE properties, we divided TCGA AG patients into CE and nCE groups according to the postcontrast T1-weighted MR images presentation. Hallmark gene sets representing specific well-defined biological states were acquired from the MSigDB and analyzed by GSEA.
Results suggested that CE patients had upregulated angiogenesis (normalized enrichment score = 1.531, P = 0.0059) and epithelial-mesenchymal transition (normalized enrichment score = 1.462, P = 0.0303) (Figure 2). Other gene set enrichment analyses were also performed. Significant enrichment was observed in genes associated with immune response, G1/S transition of the mitotic cell cycle, extracellular matrix structural components, and focal adhesion; all are key biological processes, molecular events, and pathways involved in tumor malignancy (Table S1). The canonical pathways seen in the CE vs nCE differentially expressed genes include lymph-angiogenesis pathway, extracellular matrix organization, EPHA2 FWD pathway, focal adhesion, etc.
| Screening and annotation of differential genes
To further investigate CE-associated molecular alterations, we utilized the SAM method for differentially expressed genes filtering. After excluding genes with FDR ≥ 0.05 and fold change <20%, 169 and 68 genes positively and negatively correlating with CE, respectively, were selected. Intriguingly, in the genes that are positively corelated with CE, these are a series of well-documented genes that encode proteins promoting glioma cell malignancy, such as MMP9, an enzyme involved in the breakdown of the extracellular matrix; LIF, a cytokine that affects cell growth; and TWIST1, a transcription factor involved in epithelial-mesenchymal transition. A heat map of these 237 genes was constructed ( Figure 3A). Next, 169 CE-positive genes were subjected to DAVID analysis (Table S2). Visualized GO terms of biological processes with P < 0.05 were established ( Figure 3B). Consistently, enriched CE-associated genes were mainly those involved in immune response, vascular development, and cell adhesion.
| Signature associated with CE
Generalized Linear Model algorithms were performed to extract a meaningful gene signature from 237 genes associated with CE tumors. A group of 48 genes was selected (Table S3). Among them, 42 were CE-positive related; these included POSTN, ESM1, KMO, and other previously reported oncogenes. For validation of this CE-related signature, we applied these genes to the CGGA dataset using the GLM model. Because of the discrepancy in sequencing, F I G U R E 3 Differential Genes Screening and Gene Ontology. A, Heat map of 237 differential expressed genes and corresponding molecularpathological biomarkers. CE, contrast enhancement; CGGA, Chinese Glioma Genome Atlas; nCE, noncontrast enhancement; WT, wild type; TCGA, The Cancer Genome Atlas. B, Visualized gene ontology terms of differential genes of biological processes between the two subgroups. These biological processes, including immune response, adhesion, locomotion, and blood vessel morphogenesis, which is consistent with the consequence of gene set enrichment analysis three genes (DNAH11, LOC283314, and LOC285370) were not found in the CGGA dataset. The AUC for the remaining 45 genes in terms of classifying CGGA CE-and nCE-AG patients was 0.9787; this affirmed the CE specificity of this signature (Figure 4).
| Prognostic role of the radiogenomic signature
To further explore the prognostic effect of CE-related genes, we extracted a compact signature consisting of genes using LASSO-Cox regression analysis with 10-fold crossvalidation ( Figure 5A). Seven genes with a nonzero coefficient were TMEM26 (0.1332), MAP1LC3C (0.1112), TNFAIP6 (0.0872), GDF15 (0.0746), MEOX2 (0.0188), POSTN (0.0090), and ABCC3 (0.0006) ( Figure 5, Table S4). Therefore, the risk score could be calculated using the following formula: risk score = expr TMEM26 × 0.1332 + expr MAP1LC3C × 0.1112 + expr TNFAIP6 × 0.0872 + expr GDF15 × 0.0746 + expr MEOX2 × 0.0188 + expr POSTN × 0.0090 + expr ABCC3 × 0.0006. Next, we categorized patients into high-risk and lowrisk groups according risk score; the median risk score was the cutoff. Intriguingly, in the TCGA cohort, the overall survival curves of the high-and low-risk groups were significantly separated ( Figure 6A, P = 0.0002). Moreover, CE patients with high-risk scores had worse overall survival rates than low-risk CE patients in the TCGA cohort ( Figure 6B, P = 0.0001), which further emphasized the prognostic value of the CE-related gene expression signature. Consistently, the overall survival of the high-risk group was markedly poorer than that of the low-risk group both in AG patents and enhanced CE patients, respectively, in the CGGA cohort ( Figure 6C,D, P = 0.0060 and P = 0.0115).
| Radiogenomic analysis of quantitative radiomic features of CE
Using the univariate Cox regression, nine prognostic radiomic features were identified (Table S5). Intriguingly, all of the prognostic features were textual features (Energy, Entropy, High Gray Level Run Emphasis, Informational Measure of Correlation 1, Long Run High Gray Level Emphasis, Low Gray Level Run Emphasis, Maximum Probability, Short Run Low Gray Level Emphasis, Sum Entropy). Figure S2 shows that most of the prognostic features were associated with biological processes such as angiogenesis, cell proliferation, cell migration, response to hypoxia, etc. Correlation analysis revealed that many significant correlations existed between these features ( Figure S3), which could explain why their associated biological processes were so similar.
| DISCUSSION
Postcontrast T1-weighted MR imaging is an optimal radiological modality for diagnosis and clinical management of malignant gliomas. The leakage of the contrast agent Gd-DTPA, which is attributed to the infiltration of tumor cells and focal abnormalities in the blood-brain barrier, produces an increasing signal on T1-weighted imaging. Previous studies suggested that increased neovascular permeability may also contribute to post-CE. 22 Furthermore, an early radiogenomic study revealed that contrast-enhanced tumor volume was strongly associated with poor survival in glioblastoma. 23 Hence, CE could serve as a noninvasive indicator of a tumor's biological process. However, the potential genetic alterations and corresponding molecular pathways of contrast F I G U R E 4 Contrast Enhancement (CE)-Related Signature Establishment and Validation. A series of receiver operating characteristic curves were delineated based on the retrieved genes. The areas under the curve (AUC) for 10 genes, 20 genes, 30 genes, and 48 genes were 0.86, 0.92, 0.97, and 1.00, respectively. The predictive capability of the established signature (45 genes, excluding DNAH11, LOC283314, and LOC285370) was validated using the Chinese Glioma Genome Atlas RNA-sequencing data. The AUC for this CE-related signature was 0.9787 enhanced AG remain unclear. A previous study revealed that the presence of CE was associated with IDH mutation in glioblastomas. 24 In another study, it was found that CE could be associated with several proangiogenic and edema-related genes, including neuronal pentraxin-2 and vascular endothelial growth factor in GBM patients. 25 These studies increased the impetus for an integrative analysis of radiological presentation and multi-omics data. In the present study, we comprehensively combined classical molecular-pathological biomarkers, whole genome transcriptome sequencing, clinical characteristics, radiological manifestations, and radiomics for the first time, and established a CE-related gene expression signature that could predict malignant behaviors and unsatisfactory prognoses.
The genes that are differentially expressed in CE compared to nCE tumors have specific biological functions. Several genes have clear associations with tumorigenesis in glioma or other types of carcinoma. POSTN, encoding secreted matricellular protein Periostin, is critical for epithelial-mesenchymal transition, tumor angiogenesis, and metastasis. 26 A pioneer radiogenomic study found that POSTN was the top upregulated gene that could reflect edema/cellular invasion, and revealed that high expression of POSTN resulted in poor overall survival and progression-free survival in GBM patients. 27 MZ-1, a neutralizing monoclonal antibody to POSTN, showed significant growth inhibition both in vivo and in vitro, 28 thereby providing an alternative approach in clinical management of CE patients. KMO is a pivotal enzyme in the kynurenine-mediated tryptophan degradation pathway; it positively regulates proliferation, migration, and invasion of tumor cells, and may serve as a novel prognostic marker in various cancers. 29,30 Recently, investigators revealed the crystal structure of Saccharomyces cerevisiae KMO, in the free form and in complex with the tight-binding inhibitor UPF 648, 31 which will promote the search for new KMO inhibitors in targeted therapies against neurodegenerative diseases and tumor.
Two sets of hallmark genes were meaningfully enriched when comparing CE to nCE subgroups using GSEA analysis. Epithelial-mesenchymal transition plays a prominent role in epithelial cell invasion, resistance to apoptosis, degradation of the limiting basement membrane, and tumor dissemination. 32,33 Through this process, glioma cells can achieve augmented invasion and increased blood-brain barrier damage, leading to leakage of contrast agents. Another hallmark gene set generated by GSEA analysis concerned angiogenesis. Typically, blood vessels formed owing to an unbalanced mix of proangiogenic signals are misshapen, as evidenced by precocious capillary sprouting, convoluted and excessive vessel branching, distorted and enlarged vessels, erratic blood flow, and abnormal levels of endothelial cell proliferation and apoptosis. 34,35 Therefore, these newly formed vessels can leak and cause the accumulation of radiocontrast agent in surrounding tissues, shortening the longitudinal relaxation time of neighboring water protons. Hence, therapeutically targeting these angiogenic factors may provide an effective approach for
F I G U R E 5 Construction of Contrast
Enhancement-Based Prognostic Gene Set. A, The 10-folds cross-validation for LASSO-Cox analysis identified seven genes signature. B, The seven genes were also significant in univariate Cox regression analysis. C-D, The coefficients for the significant genes derived from LASSO-Cox model CE glioma management. GSEA results also suggested that the G1/S transition phase of the mitotic cell cycle, positive regulation of cell proliferation, and other tumor-promoting processes may contribute to postcontrast enhancement; GO analysis demonstrated consistency with the GSEA. Notably, the immune response appeared to be involved in postcontrast enhancement. Immunity-associated genes, including chemokine and the chemokine receptors CCR2, [36][37][38] CCR7, 39 and CXCL9 40 are well-documented oncogenes in numerous cancers and are involved in mediating the crosstalk between tumors and their microenvironment, as well as in promoting metastasis.
A 48-gene signature, based on 237 differential genes, was established using the GLM algorithm. These compact genes were found to be associated with inflammatory response, cell adhesion, microtubule motor activity, angiogenesis, positive regulation of cell proliferation, and positive regulation of cell division; this was consistent with the GSEA and GO results. Furthermore, WNT signaling associated genes, such as HIST1H4J, WNT16, and WNT7B were significantly involved, suggesting a role for the WNT signaling pathway in promoting postcontrast enhancement. Notably, the derived prognostic signature could significantly stratify enhanced vs nonenhanced AG patients post contrast administration. This finding promotes the gene signature as a convenient prognostic tool for neurological clinicians.
With high-throughput computing, it is now possible to rapidly extract many quantitative features from medical images (known as radiomics), providing a powerful tool of associating images with underlying biological processes or clinical events. 41 In the present study, the prognostic value of the quantitative radiomic features of CE was also investigated. Nine prognostic features were identified using the univariate Cox regression model, and all of the nine features were textural features (group 3). We hypothesize that textural features are more capable of capturing the prognostic information in patients with CE AG, since textural features have exhibited powerful prognostic value in many other studies. 20,42,43 Further radiogenommic analysis revealed that all the nine prognostic features were associated with angiogenesis, which indicates that angiogenesis might be a suitable therapeutic target for patients with CE AG.
Several limitations should be noted in the present study. First, the associations and mechanistic roles of the candidate genes have not all been experimentally confirmed in previous studies. Future larger-scale studies with mechanistic exploration are required to correlate observed imaging features with biological function. As for the prognostic signature, classifying patients using their prognosis may develop a more powerful signature, and we will try this method in our future studies. Finally, the findings of radiogenomic analysis was F I G U R E 6 Prognostic Implication of the Gene Signature in the Training and Validation Sets. Patients were divided into high-risk and lowrisk groups. Anaplastic gliomas (AG) patients with high-risk scores had worse prognoses than low-risk patients in both TCGA and CGGA datasets. Moreover, postcontrast enhanced AG patients were stratifiable by this risk signature. CE, contrast enhancement; CGGA, Chinese Glioma Genome Atlas; HR, hazard ratio (95% confidence interval); TCGA, The Cancer Genome Atlas preliminary. More quantitative radiomic features and more MR sequences should be involved in the future.
In summary, this study emphasized the relevance of whole genome transcriptomes to radiological manifestation. CE, one of the most valuable radiological features of malignant gliomas, was positively associated with tumor angiogenesis and epithelial-mesenchymal transition. We identified 48 genes derived from a pool of 237 differentially expressed genes that may serve as a CE-specific signature. Meanwhile, we also showed that a simplified signature consisting of seven genes can be used to reliably classify AG patients according to prognosis. Finally, we investigated the prognostic radiomic features of CE and revealed the underlying biological processes of the features. Therefore, our results illustrate an intrinsic correlation between radiological, molecular, and phenotypic observations, and highlight the value of the radiogenomic approach to prognostication and customized treatment guidance. | 5,325.6 | 2018-08-16T00:00:00.000 | [
"Biology"
] |
Inclusive Lambda Production in Two-Photon Collisions at LEP
The reactions e^+e^- ->e^+e^- Lambda X and e^+e^- ->e^+e^- Lambda X are studied using data collected at LEP with the L3 detector at centre-of-mass energies between 189 and 209 GeV. Inclusive differential cross sections are measured as a function of the lambda transverse momentum, p_t, and pseudo-rapidity, eta, in the ranges 0.4 GeV<p_t<2.5 GeV and |\eta|<1.2. The data are compared to Monte Carlo predictions. The differential cross section as a function of p_t is well described by an exponential of the form A exp (- p_t /<p_t>)$.
Introduction
Two-photon collisions are the main source of hadron production in high-energy e + e − interactions at LEP via the process e + e − → e + e − γ * γ * → e + e − hadrons. The outgoing electron and positron carry almost the full beam energy and their transverse momenta are usually so small that they escape, undetected, along the beam pipe. In this process, the negative fourmomentum squared of the photons, Q 2 , has an average value Q 2 ≃ 0.2 GeV 2 and they can therefore be considered as quasi-real. In the Vector Dominance Model (VDM), each photon can transform into a vector meson with the same quantum numbers, thus initiating a strong interaction process with characteristics similar to hadron-hadron interactions. This process dominates in the "soft" interaction region, where hadrons are produced with a low transverse momentum, p t . Hadrons with high p t may also be produced by the QED process γγ → qq (direct process) or by QCD processes originating from the partonic content of the photon (resolved processes).
The L3 Collaboration has previously studied inclusive π 0 , K 0 S [1] and charged hadron [2] production in quasi-real two-photon collisions. In this Letter, the inclusive Λ and Λ production 1) from quasi-real photons is studied for a centre-of-mass energy of the two interacting photons, W γγ , greater than 5 GeV. The results are expressed in bins of transverse momentum, p t , and pseudo-rapidity, η, in the ranges 0.4 GeV < p t < 2.5 GeV and |η| < 1.2. The η range is further divided in two subsamples with 0.4 GeV < p t ≤ 1 GeV and 1 GeV < p t < 2.5 GeV. The data sample corresponds to a total integrated luminosity of 610 pb −1 collected with the L3 detector [3], at e + e − centre-of-mass energies √ s = 189 − 209 GeV, with a luminosity weighted average value of √ s = 198 GeV . Results on inclusive Λ production for a smaller data sample at lower √ s were previously reported by the TOPAZ Collaboration in the range 0.75 GeV < p t < 2.75 GeV [4]. The H1 Collaboration investigated the Λ photoproduction at HERA [5].
Monte Carlo simulation
The process e + e − → e + e − hadrons is modelled with the PYTHIA [6] and PHOJET [7] event generators with two times more luminosity than the data. Both generators include the VDM, direct and resolved processes. In PYTHIA, including these processes for each photon leads to six classes of interactions. A smooth transition between these classes is then obtained by introducing a transverse momentum scale. The two-photon luminosity function is based on the EPA approximation [8] with a cut at the mass of the rho meson. The SaS-1D parametrisation is used for the photon structure [9]. PHOJET relies on the Dual Parton Model [10], with a soft and a hard component. The two-photon luminosity functions is calculated in the formalism of Reference 8. The leading-order GRV parametrisation is used for the photon structure [11]. For both programs, matrix elements are calculated at the leading order and higher-order terms are simulated by parton shower in the leading-log approximation. The fragmentation is performed using the Lund string fragmentation scheme as implemented in JETSET [6], which is also used to simulate the hadronisation process. The strangeness suppression factor in JETSET is chosen as 0.3, while a value α S (m Z ) = 0.12 is used for the strong coupling constrant. From a study of Monte Carlo events, the hard component is found to be larger than the soft component. Their ratio goes from around two at low values of p t to around three at high values of p t .
The following Monte Carlo generators are used to simulate the background processes: KK2f [12] for the annihilation process e + e − → qq (γ); KORALZ [13] for e + e − → τ + τ − (γ); KORALW [14] for e + e − → W + W − and DIAG36 [15] for e + e − → e + e − τ + τ − . The response of the L3 detector is simulated using the GEANT [16] and GHEISHA [17] programs. Time dependent detector inefficiencies, as monitored during each data taking period, are included in the simulations. All simulated events are passed through the same reconstruction program as the data.
Event selection
Two-photon interaction events are mainly collected by the track triggers [18], with a low transverse momentum threshold of about 150 MeV, and the calorimetric energy trigger [19]. The selection of e + e − → e + e − hadrons events [20] is based on information from the central tracking detectors and the electromagnetic and hadronic calorimeters. It consists of: • A multiplicity cut. To select hadronic final states, at least six objects must be detected, where an object can be a track satisfying minimal quality requirements or an isolated calorimetric cluster of energy greater than 100 MeV.
• Energy cuts. The total energy deposited in the calorimeters must be less than 40% of √ s to suppress events from the e + e − → qq(γ) and e + e − → τ + τ − (γ) processes. In addition, the total energy in the electromagnetic calorimeter is required to be greater than 500 MeV in order to suppress beam-gas and beam-wall interactions and less than 50 GeV to remove events from the annihilation process e + e − → qq(γ).
• An anti-tag condition. Events with a cluster in the luminosity monitor, which covers the angular region 31 mrad < θ < 62 mrad, with an electromagnetic shower shape and energy greater than 30 GeV are excluded from the analysis. In addition, events with an electron scattered above 62 mrad are rejected by the energy cuts.
• A mass cut. The invariant mass of all visible particles, W vis , must be greater than 5 GeV. In this calculation, the pion mass is attributed to tracks while isolated electromagnetic clusters are treated as massless. The distribution of W vis for data and Monte Carlo after all other cuts are applied is shown in Figure 1. Values of W vis up to 200 GeV are accessible.
About 3 million hadronic events are selected by these criteria with an overall efficiency of 45%. The background level of this sample is less than 1% and is mainly due to the e + e − → qq(γ) and e + e − → e + e − τ + τ − processes. The background from beam-gas and beam-wall interactions is found to be negligible.
The Λ baryons are reconstructed using the decay Λ → pπ. Secondary decay vertices are selected which are formed by two oppositely charged tracks. The secondary vertices must satisfy the following criteria: • The distance, in the plane transverse to the beam direction, between the secondary vertex and the primary e + e − interaction point must be greater than 3 mm.
• The angle between the total transverse momentum vector of the two outgoing tracks and the direction in the transverse plane between the primary interaction point and the secondary vertex must be less than 100 mrad.
The distributions of these variables are presented in Figure 2. The proton is identified as the track with the largest momentum. Monte Carlo studies show that this association is correct for more than 99% of the vertices. In addition, the dE/dx measurement of both proton and pion candidates must be consistent with this hypothesis with a confidence level greater than 0.01.
After these cuts, about 70000 events are selected. The distribution of the invariant mass of the pπ system, m(pπ), shows a clear Λ peak, as shown in Figures 3 and 4 for the different p t bins listed in Table 1. The resolution of m(pπ) is found to be around 3 MeV and is well reproduced by Monte Carlo simulations.
Differential cross sections
The differential cross sections for Λ production as a function of p t and |η| are measured for W γγ > 5 GeV, with a mean value of 30 GeV, and a photon virtuality Q 2 < 8 GeV 2 with Q 2 ≃ 0.2 GeV 2 . This phase space is defined by cuts at the generator level of the Monte Carlo.
The number of Λ baryons in each p t and |η| bin is evaluated by means of a fit to the m(pπ) spectrum in the interval 1.085 GeV < m(pπ) < 1.145 GeV. The signal is modelled with a Gaussian and the background by a fourth-degree Chebyshev polynomial. All parameters, including the mass and width of the peak, are left free. The results are given in Tables 1, 2 and 3.
The overall efficiencies, also listed in Tables 1, 2 and 3, include reconstruction and trigger efficiencies and take into account the 64% branching fraction of the decay Λ → pπ. The reconstruction efficiency, which includes effects of the acceptance and the selection cuts, is calculated with the PHOJET and PYTHIA Monte Carlo generators. As both generators reproduce well the shapes of the experimental distributions of hadronic two-photon production [20], their average is used to calculate the selection efficiency. The efficiency does not depend on the Q 2 cut-off. The track trigger efficiency is calculated for each data taking period by comparing the number of events accepted by the track and the calorimetric energy triggers. The efficiency of the higher level triggers is measured using prescaled events. The total trigger efficiency increases from 82% for p t < 0.4 GeV to 85% in the high p t region.
The differential cross section as a function of p t is first measured for the three different data samples collected in 1998, 1999 and 2000 at different values of √ s and for different trigger and machine background conditions. Good agreement is obtained between the three measurements. The different data samples are therefore combined into a single data sample.
Sources of systematic uncertainties on the cross section measurements are: background subtraction, scale and resolution uncertainties, Monte Carlo modelling and the accuracy of the trigger efficiency measurement. Their contributions are listed in Table 4. The dominant source of systematic uncertainty is due to background subtraction. It is estimated using different background parameterizations and fit intervals in the fitting procedure. The scale and resolution uncertainties are assessed by varying the selection cuts. The main contributions arises from the secondary vertex selection (3.2%) and the proton and pion identification criteria (2.5%). The uncertainty due to the selection of e + e − → e + e − hadrons events is 1%. The Monte Carlo modelling uncertainty, taken as half the relative difference between PHOJET and PYTHIA, increases with p t from 1% to 5%. A systematic uncertainty of 2% is assigned to the determination of the trigger efficiency, which takes into account the determination procedure and time stability.
The sum of the differential cross sections for the e + e − → e + e − ΛX and e + e − → e + e − ΛX processes as a function of p t for |η| < 1.2 is presented in Table 1 and Figure 5. Mass effects explain the lower value obtained in the first bin. The behaviour of the cross section for 0.75 GeV < p t < 2.5 GeV is well described by an exponential of the form A exp(−p t / p t ), as shown in Figure 5a, with a mean value p t = 368 ± 17 MeV. For comparison, the values p t ≃ 230 MeV and p t ≃ 290 MeV are obtained for inclusive π 0 and K 0 S production, respectively [1].
The data are compared to the PHOJET and PYTHIA Monte Carlo predictions in Figure 5b. The region p t < 0.6 GeV is well described by PYTHIA, whereas PHOJET overestimates the cross section. On the contrary, the region 0.6 GeV ≤ p t ≤ 1 GeV is better reproduced by PHOJET while PYTHIA lies below the data. For p t > 1 GeV both PYTHIA and PHOJET underestimate the data. This level of agreement is not unusual in two-photon physics.
The differential cross sections as a function of |η| for 0.4 GeV < p t ≤ 1 GeV and 1 GeV < p t < 2.5 GeV are given in Tables 2 and 3 and shown in Figure 6. Both Monte Carlo programs describe well the almost uniform η shape, while the size of the discrepancy on the absolute normalization depends on the p t range. Table 1: The number of Λ baryons estimated by the fit, together with the Λ mass, the overall efficiency and the corresponding differential cross section as a function of p t for |η| < 1.2. The first uncertainty on the cross section is statistical and the second systematic. |η| 13.4 ± 0.2 56 ± 1 ± 3 0.6−0.9 2904 ± 70 13.0 ± 0.2 61 ± 1 ± 4 0.9−1.2 2774 ± 89 11.1 ± 0.1 68 ± 2 ± 8 Table 2: The number of Λ baryons estimated by the fit, together with the overall efficiency and the corresponding differential cross section as a function of pseudorapidity for 0.4 GeV < p t ≤ 1 GeV. The first uncertainty on the cross section is statistical and the second systematic.
|η| Table 3: The number of Λ baryons estimated by the fit, together with the overall efficiency and the corresponding differential cross section as a function of pseudorapidity for 1 GeV < p t < 2.5 GeV. The first uncertainty on the cross section is statistical and the second systematic. Table 4: Systematic uncertainty on the cross section of the e + e − → e + e − ΛX and e + e − → e + e − ΛX processes due to background subtraction, scales and resolution uncertainties, Monte Carlo modelling and trigger efficiency. The total systematic uncertainty is the quadratic sum of the different contributions. Figure 4: The invariant mass of the pπ system for a) 1.3 GeV < p t ≤ 1.6 GeV, b) 1.6 GeV < p t ≤ 2 GeV and c) 2 GeV < p t < 2.5 GeV. The signal is modelled with a Gaussian and the background by a fourth order Chebyshev polynomial. | 3,515.4 | 2004-02-25T00:00:00.000 | [
"Physics"
] |
Experimental Study on the Softening Characteristics of Sandstone and Mudstone in Relation to Moisture Content
The kinetics of fluid-solid coupling during immersion is an important topic of investigation in rock engineering. Two rock types, sandstone and mudstone, are selected in this work to study the correlation between the softening characteristics of the rocks and moisture content. This is achieved through detailed studies using scanning electron microscopy, shear tests, and evaluation of rock index properties during exposure to different moisture contents. An underground roadway excavation is simulated by dynamic finite element modeling to analyze the effect of moisture content on the stability of the roadway. The results show that moisture content has a significant effect on shear properties reduction of both sandstone and mudstone, which must thus be considered in mining or excavation processes. Specifically, it is found that the number, area, and diameter ofmicropores, aswell as surface porosity, increase with increasing moisture content. Additionally, stress concentration is negatively correlated with moisture content, while the influenced area and vertical displacement are positively correlated with moisture content. These findings may provide useful input for the design of underground roadways.
Introduction
The physical and mechanical properties of rocks, especially rock strength measurement and classification, are fundamental to the design and engineering of rock structures such as underground roadways and tunnels.This is particularly true when such structures are built in mudstone and sandstone, which are two most widely distributed rock types encountered in underground mines.Such rock types are also frequently encountered in underground coal mining operations, which are increasing in depth and complexity.As such, fluid-solid interaction becomes important, as it influences these physical and mechanical properties, as well as the microstructure, of rock.
Approximately 90% of the rock slope failure is caused by groundwater flow in porous and fractured rock; 60% of the hazards in the coal mine are associated with groundwater and 30%-40% of the hydropower dam failure is due to the seepage of water [1].Recently, water injection was found to be effective in rock-burst relief and prevention [2,3].All of these result in a need to investigate the softening characteristic of mudstone and sandstone under different water contents.
The mechanical properties of rocks with different moisture contents have been widely studied.Van Eeckhout and Peng [4] studied the effect of relative humidity on the mechanical properties of shales and found that with an increase in moisture content there is a reduction in uniaxial compression strength (UCS) and elastic modulus and an increase in Poisson's ratio.Colback et al. [5] investigated the effects of moisture on two quartzitic rock types and found that the moisture content had a major influence on the compressive strength characteristics.Lajtai et al. [6] indicated, with time-dependent tests, that water has a substantial effect on creep strain, static fatigue, and low crack propagation velocity.Ojo and Brook [7] found that moisture content, within certain values of relative saturation, has a great influence on the stable minimum or maximum strength attained in the rock.
Okubo et al. [8] found that compressive strength decreases significantly with an increase of moisture content, but an observed increase in strength with loading rate did not depend on moisture content.Feucht and Logan (1990) found that the friction factor of a saturated sandstone was reduced by 15%, while Hawkins and Mcconnell [9] investigated the influence of water content on the strength and deformability of 35 different British sandstone samples and proposed an empirical relationship between water content and UCS.
Valès et al. [10] found that rock mechanical behavior in shale is sensitive to the saturation state and is also linked to stratification in the shale and its relative orientation to the applied stress.Pham et al. [11] found that elastic parameters and compressive strength of mudstone depend strongly on the effect of suction, while Erguler and Ulusay [12] showed that, from an oven-dried to saturated clay-bearing rock, an increasing water content caused reductions of 90% in UCS, 93% in modulus of elasticity, and 90% in tensile strength, respectively.Zhou et al. [13] carried out a dynamic compressive experiment on cement mortar with different water contents and found that the dynamic compressive strength of saturated specimens was 23% lower than that of completely dry specimens.
Additionally, the analysis of microstructures has long been an effective method to study soil and rock properties.Gianelli et al. [14] used SEM-EDAX to analyze the texture of montmorillonite during their study of water-rock interaction.Chai et al. [15] found that the quantity and size of pores in bedrock increased after the introduction of water, which decreased the sliding strength of the soil.Duraiswami and Shaikh [16] employed the SEM in the analysis of fluid-rock interaction in carbonatite.The results showed that exotic minerals in the siderite carbonatite did not crystallise from carbonate magma.
Chai et al. [17] studied the effect of water-rock interaction on the mechanical properties of marly rocks and found that the changes of minerals and microstructure can trigger shallow slope failure and develop deep creep deformation along some crash zones in the reservoir shoreline.C ¸elik et al. [18] studied photomicrographs and textures of the Ayazini tuffs using SEM.Zhang et al. [19] used XRD, SEM, and energy dispersive spectrometry to study alteration of physical and chemical properties, mineral composition, and microstructures caused by the removal of free iron oxides.
In most water-rock interaction studies, only mechanical property changes are considered and microstructural changes are typically ignored.Limited studies have been conducted on these changes on both macro-and microscopic scales.Therefore, this paper presents a series of experiments that investigate the process and characteristics of mudstone and sandstone during saturation.Not only is the relationship between mechanical properties and moisture content discussed, but we also analyze the microstructure changes under different moisture content.From this, detailed information about the influence of moisture content is obtained with regard to its influence on the softening characteristics of mudstone and sandstone.
Geological Background
The main study area is the 8214 working face of the Tongting Colliery, located in Huaibei City, Anhui, China (Figure 1).This colliery operates at a depth of 616.8-665.3m in the Permian system with a 2-m-thick immediate roof consisting of mainly siltstone and sandstone.The thickness of the predominantly mudstone floor is about 0.7 m.
There is water that has remained in the goaf area of the 8212 working face, which is considered likely to enter the working face of 8214 and cause instability of both the roof and floor.There is also the possibility of hazardous water inrush.Therefore, the study of rock softening characteristics caused by different moisture contents and saturation is of great importance to the safety and operability of the Tongting Colliery.
Sample Preparation
The samples used in the experiments are typical undisturbed samples of mudstone and sandstone taken from roof and floor of the 8214 working face.To minimize moisture content changes during transportation, the undisturbed samples were first sealed in plastic bags and then wrapped in gunny bags immediately after being obtained.They were then transported to the laboratory and processed according to the test standards [20].28 sandstone samples and 22 mudstone samples, both of Φ50 mm × 50 mm size, are used in the shear test.
Methodology
4.1.X-Ray Diffraction Analyses.XRD is an analytical technique in which a prepared sample is bombarded with an Xray beam at varying angles to determine its mineralogy [21].The XRD analysis of the mudstone and sandstone samples was carried out at the Advanced Analysis & Computation Center, China University of Mining and Technology.The various mineral phase components of the mudstone and sandstone specimens were determined from carefully prepared powdered samples (325 mesh).A polarizing microscope was used to visually inspect and petrographically describe the powdered mudstone and sandstone samples.Once XRD diffractograms were obtained, phase identification and component analysis was carried out using Jade (Materials Data Inc., California, USA).Diffractograms of the sandstone and mudstone are shown in Figure 2, and the XRD results after Jade analysis are given in Table 1.Bragg-Brentano reflection focusing, which is the most widely used diffractometric arrangement, was used during quantitative phase analysis [22].The diffraction pattern of the sandstone sample confirms the presence of quartz (70.2%) and calcium silicate (27.1%).Clay minerals were identified in the mudstone samples, of which kaolinite is the main clay mineral.
Shear Tests.
Specimens for rock mechanical tests were prepared in the laboratory using a core drilling machine; the core specimens were machined according to standards of the International Society for Rock Mechanics [23].All samples were weighed in their initial state to determine their natural moisture content.Then, a natural immersion test was designed, in which both mudstone and sandstone samples were saturated with water for different times to obtain samples with different moisture contents.Samples with short immersion times were sealed and stored upside down.
Shear tests were carried out in accordance with methods suggested by the ISRM [23].Figure 3 shows a few of the rock samples prepared for the shear tests.
Scanning Electron Microscopy.
Scanning electron microscopy is a well-established method for the characterization of surfaces in ultrahigh vacuum (UHV), high vacuum (HV), and low vacuum (LV) conditions in many different fields of research [24].In this paper, SEM was used to investigate the surface microstructure of the samples at different moisture contents by comparing various SEM images as moisture contents were changed.SEM images were processed using Image-Pro Plus (Media Cybernetics Inc., Maryland, USA) to obtain the key pore structure parameters, such as number and area.The image processing steps included noise removal, binarization, pore selection, and counting.Additionally, the surface porosity of the samples with different moisture contents were calculated and analyzed.
Rock Immersion Test Results.
As described above, mudstone and sandstone samples were subjected to immersion testing.The different immersion times for the sandstone samples were 1, 2, 3, 4, 5, 6, and 8 days.Mudstone samples were saturated in water for: 1, 2, 3, 4, and 6 days.
The average moisture content of the mudstone and sandstone samples was then calculated for each immersion time and curves of moisture content and immersion time were generated (Figure 4).
It was found that, at the beginning of immersion, the moisture content of both mudstone and sandstone increased dramatically.The moisture content of mudstone increased to 4.59% after 3 days' saturation, and that of the sandstone increased to 1.91% after 3 days.The rate of increase in moisture content becomes reduced with increasing immersion time, and the moisture content of mudstone and sandstone increased to only 4.97% and 2.27%, respectively, after 6 days' and 8 days' saturation.
Shear Test Results
. The mechanical properties investigated in this paper include shear strength, cohesion, and internal friction angle.Changes in these mechanical properties at different moisture contents are shown in Table 2.
Regression analysis was carried out to investigate the relationship between moisture content and the mechanical properties.Curves of best-fit for the experimental data take the general form of = ⋅ exp(−/) + (Figure 5) and the expressions and correlation coefficients of shear strength, cohesion and internal friction angle according to moisture content are presented in Table 3. Overall, the mechanical properties of the mudstone were found to be significantly lower than those of the sandstone, and all investigated mechanical properties tend to decrease with increasing moisture content.
Figure 4(a) further manifests that the shear strength of sandstone and mudstone decreases with the increase of water content.At moisture contents of 1.27% and 1.97% (1 day immersion), the shear strengths of the sandstone and mudstone were 64.95 MPa and 15.44 MPa, respectively.This Stress-strain curves obtained during the experiments were analyzed to study the softening characteristics caused by different moisture contents.Stress-strain curves obtained at different moisture contents with a shear angle 45 ∘ are shown in Figure 6.
It can be concluded from the curves that dramatic brittle failure occurs when the moisture content is low for both mudstone and sandstone.Shear stress drops suddenly after failure.With increasing moisture content, the properties of creep in the rock samples become increasingly important, as slow rupturing appears in both mudstone and sandstone with only moderate decreases in shear stress after rock failure.
The softening characteristics of rock samples can also be illustrated by the failure modes of mudstone and sandstone (Figures 7 and 8).Failure occurs along one shear failure surface when the moisture content is low and, in most cases, the shear failure surface is parallel to the shear angle.More shear failure surfaces appear with an increase in moisture content and, ultimately, both mudstone and sandstone samples break into numerous small pieces after shear failure.
Microscopic Analysis with Different Moisture Content.
Mudstone and sandstone block samples with different moisture contents were analyzed using SEM.The blocks were 1 × 1 × 0.5 cm in size, gold sprayed in a laboratory, and then fixed on the observation platform.The SEM analysis was conducted at the Advanced Analysis & Computation Center using a FEI QuantaTM250 instrument, with which 500, 2000, 4000, and 8000 multiple SEM images were captured.Only 4000 multiple SEM images were selected for analysis in this paper.
SEM images of the mudstone with different moisture contents are shown in Figure 9. Figure 9(a) shows an SEM image of the initial mudstone sample with a moisture content of 1.54%.Very few micropores can be observed, while tortuosity of the micropores is clearly visible.SEM images of the mudstone with moisture contents of 1.87%, 2.96%, and 4.31% are shown in Figures 9(b), 9(c), and 9(d), respectively.Furthermore, binary images of the SEM images were extracted using Image-Pro Plus (Figure 10).
For the sandstone, only the initial state (moisture content of 0.91%) and final state (7-day immersion; moisture content of 2.18%) were analyzed.The SEM images and binary images are shown in Figures 11 and 12, respectively.The parameters of the surface micropores, such as number, total area, and diameter, were simultaneously counted from the binary images and the surface porosity was calculated.Table 4 lists the parameter variations of the micropores of the sandstone and mudstone with different moisture contents.Figure 13 shows histograms and line graphs of variation of the parameters of the micropores.
Overall, the number of micropores was observed to increase with increasing moisture content and, during immersion, water primarily entered the original pores and fractures.Different swelling properties of different minerals likely led to unbalanced stresses inside the rock, resulting in formation of new fractures.This not only caused an increase in the number of micropores, but also an increase in the Shock and Vibration total area and surface porosity.Also, seepage of water into the rock led to an increase in interconnectedness of the new micropores and fractures, forming larger ones, and thus increasing the area and diameter of the micropores.
Basic Equation of Coupled Model of Seepage and Stress.
The following equations describe the quantification of the perimeter of maximum micropores (1 mudstone with moisture 1.54%; 2 mudstone with moisture content 1.87%; 3 mudstone with moisture content 2.96%; 4 mudstone with moisture content 4.31%; 5 sandstone with moisture content 0.84%; 6 sandstone with moisture content 2.10%).
various parameters required to develop the model of seepage and stress defined in further detail in Section 6.2.
(1) Equilibrium equation (2) Geometric equation Shock and Vibration (3) Constitutive equation (4) Seepage equation (5) Seepage-stress relation equation The physical and mechanical parameters of the rock mass are list in Table 5, which were obtained from converting rock sample parameters to rock mass parameters using the Hoek-Brown strength criterion [25].Different support pressures are applied so that the influence of support pressure on the roadway can be determined.
Stress Distributions Analysis.
Excavation alters the equilibrium state of in situ stress, resulting in the redistribution of in situ stresses around the roadway.Figure 15 illustrates the stress distributions in the vicinity of the underground roadway and Figure 16 shows the stress concentration at the corner of the roadway.
Stress concentration factors and stress nephograms at different moisture contents reflect not only the stress levels in rock mass but also the process of stress transfer and evolution.In general, both the center of the roof and floor undergo stress release after the roadway is excavated and form relief areas under and above the goaf, while the corners of the roadway concentrate stress and form pressurized areas.When the moisture content is 0.5%, the stress concentration factor around the roadway is 3.4, which reduces to 2.54 when the moisture content is 2%.It can be concluded that pressure has been released around the roadway, but the influenced regions related to roadway excavation will extend with increasing moisture content.
Vertical Displacement Analysis.
Figure 17 shows vertical displacement nephograms when support pressure is 3 MPa.As can be seen, vertical displacement induced by roadway excavation decreases as the distance from the roadway increases, and subsidence will occur at the roof while the floor will heave up.The displacement region becomes larger because more stress is transferred to the rock mass as the moisture content increases.The absolute value of floor heave is larger than the roof subsidence because support pressure has been applied to the roof and ribs of the roadway.
Changes in roadway closure in the vertical direction are shown in Figure 18; when support pressure is low, roadway closure increases drastically as the moisture content increases.Support pressure can greatly improve roadway conditions since it will not only reduce roadway closure but also increase stability.
Conclusions
This investigation was conducted to study the mechanical properties and microstructural changes in mudstone and sandstone.Rock samples were collected from Permian siltstone and sandstone of the Shihezi Formation at depths of 616.8-665.3m in the Tongting Colliery.Fluid-solid coupling effects were studied in terms of roadway stability and compared with different moisture contents in an underground roadway excavation simulation.Based on the results of this investigation, the following conclusions can be drawn: (1) Mudstone in the investigated areas consists of a wide range of clay minerals, with kaolinite being the main clay mineral (56.10%), while quartz and calcium silicate in the sandstone account for 70.20% and 27.10%, respectively.
(2) In all cases, the moisture contents of mudstone and sandstone increased rapidly at the beginning of immersion test.After 9 days of immersion, the moisture content of mudstone and sandstone reached 4.97% and 2.27%, respectively.
(3) All mechanical properties investigated showed a tendency to decrease with increasing moisture content.The general form = ⋅ exp(−/) + was found to describe the best-fit line for data points during regression analysis.The mechanical properties of the mudstone (compressive strength, cohesion, and internal friction angle) were found to be significantly lower than those of the sandstone.
(4) As the seepage of water into the samples increased (increasing moisture content), new pores and fractures were created and the original pores and fractures linked up, resulting in an increase in the number, total area, and diameter of micropores and surface porosity.
(5) Stress will be redistributed after roadway excavation and pressurized areas will occur near the corner of the roadway.Stress concentration at the corner will decrease, indicating a release of stress with increasing moisture content.At the same time, the influenced region caused by roadway excavation will extend.Vertical displacement and the influenced region will increase with increasing moisture content.
The roadway closure has a positive correlation with the moisture content and support pressure plays a vital role in stabilizing the underground roadway.
Figure 4 :
Figure 4: Curve of moisture content and saturation time.
) 6 . 2 .
Model Definition.As shown in Figure14, the model is 100 m wide and 96.5 m high.A roadway is excavated 20 m above the bottom.The width of the roadway is 5 m, the height of the roadway is 6.5 m, and it has a semicircular radius at the top of 2.5 m.The left and right boundaries of the model are roller boundaries, while the top and roadway surface are free boundaries.The bottom of the model has a fixed constraint boundary applied and there is a boundary load at the top of the model which represents the upper rock mass load.
Figure 15 :
Figure 15: Stress distribution of surrounding rock on different moisture content.
Figure 16 :Figure 17 :Figure 18 :
Figure 16: Stress concentration factor at the corner of roadway.
Table 2 :
The mechanical properties of rock samples under different moisture content.
Table 3 :
The expression between mechanical properties and moisture content.
Table 4 :
Parameter variations of micropores of sandstone and mudstone with different moisture content. | 4,671.8 | 2017-02-22T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Geology"
] |
The Effect of Corporate Profit Tax on Attracting Foreign Direct Investment in Albania
The aim of this study is to analyze the major cause of the instability of foreign direct investments in Albania these last years, their lowest level of FDI when compared with other countries in the region, and the importance of these investments in developing countries like Albania. The size of FDI has a significant effect on the financial and macroeconomic factors, government and infrastructure factors, and the factors by fiscal nature. In this study, we disengage fiscal factors and in particular, the corporate profit tax rate. The aim of this study was to prove that in countries with weak legal infrastructure, this factor is statistically significant in attracting Foreign Direct Investment. Thus, should Albania be considered as a country with weak legal infrastructure? Due to this question, we took into consideration specific indicators measured in recent years by various world organizations by comparing them with various countries of the region. Also, we built an econometric model through multiple regression by collecting data on the tax rate applied in Albania from 19922016, and the level of FDI for this period. This is in a bid to ascertain the effect or impact that profit taxes has on FDI inflows in Albania. In conclusion, Corporate Tax Rates is a factor that has a very significant impact on the size of FDI in Albania.
Introduction
The importance of foreign direct investment is always increasing, especially after the deelopment of globalization that has occurred recently in the world. In the last two or three decades, a lot of authentic empirical studies of various researchers have been made. These researchers have proven major benefits as a host country of FDI, but the disadvantages or losses that host countries have, especially in small countries where foreign investors are favored by the respective governments for privatization or concession granting of some state companies versus a lower value than the market, is to have the benefit of the political campaign. Another reason is that economy experts believe that foreign direct investment negatively affect the balance of payments. This makes it unstable as a result of inflows from the beginning and then outflow until the repatriation of capital. However, comparing the cost of foreign direct investment with the benefits host countries have, these researchers confirmed and proved that these are very small. For this reason, developing countries most especially are always in competition with each other to create a more favorable business climate for attracting foreign direct investment.
There are two types of FDI, otherwise known as portfolio investments. They are investments that are carried out through existing financial channels between the investment country and the receiving country, while the other type are the investment that is "directly" carried out in machinery equipment , buildings, and not only during the flows of capital.
FDI in Albania are regulated by law No. 7764, dated 02.11.1993 "Law on Foreign Investment." The law reads: "Foreigners have the right to engage in economic activities without the need of a possible authorization or license. Foreign investors are treated no less favorably than Albanian citizens, except in cases involving the possession of land, a case which is handled by a separate law. Albanian legislation provides that at any time or occasion, foreign investment will be treated equally and fair and will get full protection and security." Also, according to this law, foreign direct investment includes investments of foreign individuals and institutions in the local economy. Consequently, this is used for the purchase of at least 10% of the equity of an enterprise.
Foreign direct investments are an important indicator of the confidence of foreign investors in the country. Basically, they are downgraded in terms of economic and political crises. Also, they grow in terms of improving the economic and political environment. The value of foreign direct investment in our country, measured in USD, has been unstable over the past 24 years as can be seen from Figure 1. As a result, Albania was ranked second in the region after Serbia, being placed in a more favorable position with neighboring countries due to its tourism and textiles industries above all. A high level of investment is a supporting factor for growth. This increase occurred for two reasons in our view, primarily because: ISSN 1923-4023 E-ISSN 1923 2. The high degree of privatization made during this period in our country.
Furthermore, the highest decrease is noted during the crisis year of 1997, when foreign direct investment decreased by 47%.
Figure 2. Foreign Direct Investment dynamics in the last 3 years
The behavior of FDI in 2016 has highlighted their unstable structure as can be seen in Figure 2. For several years, there is a lack of inflows of new investors in the country, unlike neighboring states, especially Macedonia and Serbia. This is very good news for the entrance of foreign companies which promises stable economic growth as they are mainly concentrated in automobile industry.
According to the statistics of the balance of payments of the Bank of Albania, FDI recorded a strong decline of 42% in the first quarter of this year compared with the same period last year. Foreign investments were 152 million euro in the first quarter, from 261 million euro in January-March 2015, and 222 million euro in first quarter of 2014.
1. According to the Bank of Albania, inflows in the form of direct investment continue to remain concentrated in the hydrocarbons sector (to the extent of 27 percent), energy (about 25 percent), construction (to the extent of 17 percent), and telecommunications sector (6 percent). Thus, the biggest decline of the flow of FDI is observed in the hydrocarbons sector. In recent years, the biggest investor in the country has been Bankers Petroleum which operates in the hydrocarbons sector. Investments in this sector during the first quarter of this year constituted 40% of total investments.
A decrease of foreign direct investments was expected. According to the official data of this company during the first quarter of 2016, capital expenditures have fallen to $13 million compared with $ 25 million at the last quarter, and $ 50 million in first quarter of 2015.
According to the statistics of the liabilities of system published by The Bank of Albania, the addition of banks capital, another element that encouraged investment in 2015, was 20% lower in the first quarter of 2016 compared with the same period last year. Mobile license, which gave a rise to foreign investment in the first half of 2015, were not given in the first months of this year. Higher taxes in the region, lack of incentives (Serbia e.g. though has taxes that are comparable to us, offers subsidies to investors) exacerbated climate of current investors', which is openly reported from all rooms abroad were some of the reasons that are keeping foreign investors away. Weibo, a major Turkish enterprise, in the textile industry has opened branches in Macedonia and then moved to Serbia. Johnson Control, Lear Corporation, and Capco are three enterprises in the area of the car support industry, which have shown interest in Albania, have now invested in Macedonia.
Also during 2016, Albania has adopted a liberal legal framework designed to create a favorable investment climate for foreign investors by adopting a new law on strategic investments in Albania. The law expressly provides as "strategic investments" private investments, public investments, and/or public-private investments in various sectors of the economy. So, through this law, the government intends to increase the FDI through concessions or public private partnership. This they did by not reducing the profit tax rate for foreign investors. FDI in Albania will continue to be an important source of capital flows and economic development.
The Aim and Methodology of the Paper
The aim of this study is to prove that based on the level of FDI in a country like Albania with weak legal infrastructure, besides the macroeconomic and financial factors, governmental factors, infrastructural factors, and fiscal factors have a great influence on the level of FDI. To fulfill the aim of the study, we have taken the tax rate of profit tax of companies. Also, we are going to see the impact that this fiscal factor has on the level of FDI by raising the following hypothesis: Hypothesis 1: In countries with weak legal infrastructures, profit tax has a strong negative correlation and impact, with FDI inflows.
The study undertakes to confront and certify the raised hypothesis.
According to the purpose of the study, it made a detailed analysis of FDI in our country based on statistical data. To realize this paper, we used several research methods such as analysis of documentation and comparative analysis which are referred to specific literature for FDI through reading. However, these methods allow us to analyze and compare the progress of the level of FDI in Albania and the tax rate on the profits of companies in the time extension. This is carried out for the period of 24 years in which the study is expanded. It also involves spatial extension by comparing them with the region. The literature used is taken from the official websites of the World Bank. Also, a major role occupied the reports published by the Bank of Albania, Ministry of Finance, INSTAT, Open Data Albania, UNCTAD etc.
Subsequently, the data collected for the study are secondary data for which we do not have evidences for their reliability because they are provided by official site. Although the data are official, our model has restrictions due to the quantitative analysis. In addition, the series of 24 Observations is just too small and can affect the drawing of conclusions. Since the rate of errors in the results of the quantitative analysis is the order of the thousandth, they are considered as acceptable errors which cannot affect the outcome of the analysis. For this reason, the model results should be seen more qualitatively rather than quantitatively. We believe that these results, which are similar but not very accurate, highlight the strong negative correlation between FDI and Corporate Tax Rates. Through computer programs such as QuickBooks and financial programs such as Alpha or Excel, and the statistical program SPSS, data analysis was carried out using the method of simple statistical regression. Through this method, it was possible to clarify the probabilistic and structural impact of the Tax Rates indicator of Corporate (as independent variable) on the level of FDI (as dependent variable). To see how parametric changes will affect FDI flows, we used an econometric model that is presented in Chapter 2. ISSN 1923-4023 E-ISSN 1923 The study is focused on measuring the degree of the influence of fiscal factors, specifically the profit tax of investment companies to the size of FDI inflows for the entire post-communist period. Before we prove this impact, we have to prove the reason Albania is considered to be a country with a weak legal infrastructure. Also, we have to support or agree with the conclusions reached by Christian Bellak, Markus Leibrecht, and Jože P. Damijan (2009) at The United Nations Conference on Trade and Development (UNCTAD, 2012). However, they have confirmed that countries that have a strong legal infrastructure may have the opportunity to apply high tariffs for profit tax. Also, they do not adversely affect the inflows of FDI and the use these corporate taxes to finance the infrastructure. Therefore, they were able to prove that countries with weak or inferior legal infrastructure should apply a lower corporate tax rate to attract FDI within a short period of time. So, there is a negative correlation of profit tax degree with the inflows of FDI.
To determine whether or not Albania is considered a country with weak legal infrastructure, we will refer to some specific indicators of 2015 which would help to determine it. Thus, they are measured and published by various international organizations. Some of these indicators are: 1. Economic globalization factors with impact on FDI for 2015: KOF is the lowest for the Western Balkans, social globalization with the value 42.56, and political globalization with the value 55.75. This indicator is related with technology and communication, the migration of citizens, capital flows, and technology transfer.
2. The legal system: The legal system is subject to frequent changes and not being applied strictly brings risk for foreign investors.
3. Facility of doing business is another indicator that foreign investors refer to, before taking decisions for investment in the host country. For this indicator, Albania is ranked 68th in a list with the participation of 189 countries for year 2014-2015. However, Albania is in a very low position compared to Macedonia which ranked 30th and Montenegro which ranked 36th.
Economic stability (inflation).
This is a complex indicator that is measured by the changes of inflation in the economy. In recent years, it ranges from 0-4% and its trend shows stability compared with other countries in the region.
Analysis and Regression
Hypothesis 1: In countries with weak legal infrastructures, profit tax has a strong negative correlation and impact with FDI inflows.
To analyze and to prove the above hypothesis and the connection between the corporate profit tax rate and the level of FDI inflows, we made an econometric model through the regression model. In this connection, we considered the profit tax rate as an independent variable, while the level of FDI inflows as a dependent variable.
Based on the data obtained from the below Table 4, which shows the profit tax rate applied in Albania from 1992 to 2016 as well as the level of FDI inflows for this 24 years period , we have found the link between these two variables which is expressed through the appropriate equation: The regression equation is: The equation is of first degree indicating that the connection is linear. The value before × is negative, which shows the negative correlation that exists between the two variables. Also, the coefficient of determination R² = 0.9313 shows a very good approximation of variables taken in the analysis. Sipas të dhë nave të regresionit më sipë r, mund të themi se CIT ë shtë shumë i rë ndë sishë m në pë rcaktimin e IHD-ve, duket shumë e qartë , dhe vlera e p = 0:00, pra është ˂ 00:05. As expressed in the Figure 3, we can say that CIT has a strong negative correlation with FDI. Therefore, the higher the tax rate on profits in countries with weak legal and road infrastructure like Albania, the lower will be the inflows of FDI. Figure 3. Foreign direct investment depending on the profit tax rate Thus, we confirmed the above hypothesis that: In countries with weak legal infrastructures, the profit tax has a strong negative correlation and impact on FDI inflows. | 3,444 | 2017-02-28T00:00:00.000 | [
"Economics"
] |
Extracting Information from Qubit-Environment Correlations
Most works on open quantum systems generally focus on the reduced physical system by tracing out the environment degrees of freedom. Here we show that the qubit distributions with the environment are essential for a thorough analysis, and demonstrate that the way that quantum correlations are distributed in a quantum register is constrained by the way in which each subsystem gets correlated with the environment. For a two-qubit system coupled to a common dissipative environment , we show how to optimise interqubit correlations and entanglement via a quantification of the qubit-environment information flow, in a process that, perhaps surprisingly, does not rely on the knowledge of the state of the environment. To illustrate our findings, we consider an optically-driven bipartite interacting qubit AB system under the action of . By tailoring the light-matter interaction, a relationship between the qubits early stage disentanglement and the qubit-environment entanglement distribution is found. We also show that, under suitable initial conditions, the qubits energy asymmetry allows the identification of physical scenarios whereby qubit-qubit entanglement minima coincide with the extrema of the and entanglement oscillations.
Most works on open quantum systems generally focus on the reduced physical system by tracing out the environment degrees of freedom. Here we show that the qubit distributions with the environment are essential for a thorough analysis, and demonstrate that the way that quantum correlations are distributed in a quantum register is constrained by the way in which each subsystem gets correlated with the environment. For a two-qubit system coupled to a common dissipative environment E, we show how to optimise interqubit correlations and entanglement via a quantification of the qubit-environment information flow, in a process that, perhaps surprisingly, does not rely on the knowledge of the state of the environment. To illustrate our findings, we consider an optically-driven bipartite interacting qubit AB system under the action of E. By tailoring the light-matter interaction, a relationship between the qubits early stage disentanglement and the qubit-environment entanglement distribution is found. We also show that, under suitable initial conditions, the qubits energy asymmetry allows the identification of physical scenarios whereby qubit-qubit entanglement minima coincide with the extrema of the AE and BE entanglement oscillations.
T he quantum properties of physical systems have been studied for many years as crucial resources for quantum processing tasks and quantum information protocols [1][2][3][4][5] . Among these properties, entanglement, non-locality, and correlations between quantum objects arise as fundamental features 6,7 . The study of such properties in open quantum systems is a crucial aspect of quantum information science 8,9 , in particular because decoherence appears as a ubiquituous physical process that prevents the realisation of unitary quantum dynamics-it washes out quantum coherence and multipartite correlation effects, and it has long been recognised as a mechanism responsible for the emergence of classicality from events in a purely quantum realm 10 . In fact, it is the influence of harmful errors caused by the interaction of a quantum register with its environment 10-13 that precludes the construction of an efficient scalable quantum computer 14,15 .
Many works devoted to the study of entanglement and correlations dynamics in open quantum systems are focused on the analysis of the reduced system of interest (the register) and the quantum state of the environment is usually discarded [16][17][18][19][20] . There have recently been proposed, however, some ideas for detecting system-environment correlations (see e.g., refs. 21, 22, and references therein). For example, experimental tests of systemenvironment correlations detection have been recently carried out by means of single trapped ions 23 . The role and effect of the system-environment correlations on the dynamics of open quantum systems have also been studied within the spin-boson model 24,25 , and as a precursor of a non-Markovian dynamics 26 . Here, we approach the qubit-environment dynamics from a different perspective and show that valuable information about the evolution of quantum entanglement and correlations can be obtained if the flow of information between the register and the environment is better understood.
It is a known fact that a quantum system composed by many parts cannot freely share entanglement or quantum correlations between its parts [27][28][29][30][31][32] . Indeed, there are strong constrains on how these correlations can be shared, which gives rise to what is known as monogamy of quantum correlations. In this paper we use monogamic relations to demonstrate that the way that quantum correlations are distributed in a quantum register is constrained by the way in which each subsystem gets correlated with the reservoir 27,33,34 , and that an optimisation of the interqubit entanglement and correlations can be devised via a quantification of the information flow between each qubit and its environment.
We consider a bipartite AB system (the qubits) interacting with a third subsystem E (the environment). We begin by assuming that the whole 'ABE system' is described by an initial pure state r ABE 0 ð Þ~r AB 0 ð Þ6r E 0 ð Þ; i.e., at t 5 0, the qubits and the environment density matrices, r AB and r E , need to be pure.
The global ABE evolution is given by where d / ij denotes the quantum discord [36][37][38] , and S ijj is the conditional entropy 6,7 . Since the tripartite state ABE remains pure for all time t, we can calculate, even without any knowledge about E, the entanglement E ij between each subsystem and the environment. We do so by means of discord. We also compute the quantum discord between each subsystem and the environment as (see the Methods section) We note that, in general, d / AE =d / EA and d / BE =d / EB , i.e., these quantities are not symmetric. Directly from the KW relations, such asymmetry can be understood due to the different behaviour exhibited by the entanglement of formation and the discord for the AB partition; e.g., the equality holds for bipartite pure states). In our setup the AB partition goes into a mixed state due to the dissipative effects and the qubits detuning 39,40 . In our calculations, the behaviour of d / iE and d / Ei , i 5 A, B, is similar, so we only compute, without loss of generality, those correlations given by equations (3). An important aspect to be emphasised on the KW relations concerns its definition in terms of the entanglement of formation. Although the original version of the KW relations is given in terms of the entanglement of formation and classical correlations defined in terms of the von Neumann entropy, this is not a necessary condition. Indeed, similar monogamic relations can be determined by any concave measure of entanglement. In this sense, we can define a KW relation in terms of the tangle or even the concurrence, since they both obey the concave property. For instance, in [41] the authors use the KW relation in terms of the linear entropy to show that the tangle is monogamous for a system of N qubits. Here we use the entanglement of formation and quantum discord given their nice operational interpretations, but we stress that this is not a necessary condition.
We illustrate the above statements by considering qubits that are represented by two-level quantum emitters, where j0 i ae, and j1 i ae denote the ground and excited state of emitter i, respectively, with individual transition frequencies v i , and in interaction with a common environment (E) comprised by the vacuum quantised radiation field 39,40 , as schematically shown in Fig. 1(a), where V denotes the strength of the interaction between the qubits.
The total Hamiltonian describing the dynamics of the whole ABE system can be written as ", i 52m i ?E i gives the qubit-field coupling, with m i being the i-th transition dipole moment and E i the amplitude of the coherent driving acting on qubit i located at positionr i . The two emitters are separated by the vectorr and are characterised by transition dipole moments m i ;AE0 i jD i j1 i ae, with dipole operators D i , and spontaneous emission rates C i . Given the features of the considered physical system, we may assume a weak system-environment coupling such that the Born-Markov approximation is valid, and we work within the rotating wave approximation for both the system-environment and the system-external laser Hamiltonians 42 . Within this framework, the effective Hamiltonian of the reduced two-qubit AB system, which takes into account both the effects of the interaction with the environment and the interaction with the coherent laser field, can be written as where , and V is the strength of the dipole-dipole (qubit) coupling which depends on the separation and orientation between the dipoles 39,40,42 .
In order to impose the pure initial condition to the ABE system required to use the KW relations, we suppose that the quantum register is in a pure initial state and that we have a zero temperature environment. Thus, However, we note that a less controllable and different initial state for the environment can be considered since an appropriate purification of the environment E with a new subsystem E' could be realised. Despite this, for the sake of simplicity in calculating the quantum register dynamics, we consider a zero temperature environment. The results below reported require a quantification of the qubits dissipative dynamics. This is described by means of the quantum master equation 39,40 : where the commutator C U : gives the unitary part of the evolution. The individual and collective spontaneous emission rates are considered such that C ii 5 C i ; C, and C ij~C Ã ji :c, respectively. For simplicity of writing, we adopt the notation r ij , where i, j 5 1, 2, 3, 4 for the 16 density matrix elements; S i r ii 5 1.
The master equation (7) gives a solution for r AB (t) that becomes mixed since it creates quantum correlations with the environment. We pose the following questions: i) How does each qubit get entangled with the environment? ii) How does this depend on the energy mismatch between A and B?, and iii) on the external laser pumping?
Results
Quantum register-environment correlations. To begin with the quantum dynamics of the qubit-environment correlations, we initially consider resonant qubits, v 1 5 v 2 ; v 0 . In the absence of optical driving, there is an optimal inter-emitter separation R c which maximises the correlations 43 . In Fig. 1(b) we plot the quantum discord d / AE , and d / AB , and the entanglement of formation E AE as a function of the interqubit separation k 0 r at t 5 C 21 . The maximum value reached by each correlation is due to the behaviour of the collective damping c, which reaches its maximum negative value at the optimal separation k 0 R c^0 :674, as shown in Fig. 1(b), with k 0 5 v 0 /c. This is due to the fact that the initial state Y z j i1 ffiffi ffi 2 p 01 j iz 10 j i ð Þdecays at the rate C 1 c (see equations (8) with a 5 1/2), and hence the maximum life-time of jY 1 ae is obtained for the most negative value of c: for any time t, the correlations reach their maxima precisely at the interqubit distance R c (the same result holds for the BE bipartition, not shown). We stress that it is the collective damping and not the dipolar interaction that defines the distance R c . For a certain family of initial states (which includes jY 1 ae), the free evolution of the emitters is independent of the interqubit interaction V 44 : for the initial states (7) admits an analytical solution and the non-trivial density matrix elements read and r 32 ðtÞ ¼ ðr 23 ðtÞÞ Ã . This solution implies that the density matrix dynamics dependence on V vanishes for a 5 1/2 (jY 1 ae), and hence the damping c becomes the only collective parameter responsible for the oscillatory behaviour of the correlations, as shown in Fig. 1(b). A similar analysis can be derived for the initial states Thus, the 'detrimental' behaviour of the system's correlations d / AB and E AB reported in [43] is actually explained because such b states are not, in general, 'naturally' supported by the system's Hamiltonian since they are not eigenstates of We now consider the qubits full time evolution and calculate the correlations dynamics for the whole spectrum of initial states jY(a)ae, 0 # a # 1. The emitters' entanglement E AB exhibit an asymptotic decay for all a values, with the exception of the two limits a R 0 and a R 1, for which the subsystems begin to correlate with each other and the entanglement increases until it reaches a maximum before decaying monotonically, as shown in Fig. 1(c). The AB discord also follows a similar behaviour; this can be seen in Fig. 1(e) for a 5 0. Initially, at t 5 0, the entanglement E AE (Fig. 1(d)) equals zero because of the separability of the tripartite ABE state at such time. After this, the entanglement between A and E increases to its maximum, which is reached at a different time (t , C 21 ) for each a, and then decreases asymptotically. The simulations shown in Figs. 1(c) and (d) have been performed for the optimal inter-emitter separation R c . These allow to access the dynamical qubit information (entanglement and correlations) exchange between the environment and each subsystem for suitable qubit initialisation. Figure 1(c) shows the quantum entanglement between identical emitters: E AB is symmetric with respect to the initialisation a 5 1/2, i.e., the behaviour of E AB is the same for the separable states j01ae and j10ae. In contrast, Fig. 1(d) exhibits a somewhat different behaviour for the entanglement AE, which is not symmetric with respect to a: the maximum reached by E AE increases as a tends to 0 (the discord d / AE follows the same behaviour-not shown). The dynamical distribution of entanglement between the subsystem A and the environment E leads to the following: it is possible to have near zero interqubit entanglement (e.g., for the a 5 1 initialisation) whilst the entanglement between one subsystem and the environment also remains very close to zero throughout the evolution.
This result stresses the sensitivity of the qubit-environment entanglement (and correlations) distribution to its qubit initialisation. To understand why this is so (cf. states j01ae and j10ae), we analyse the expressions for d / AE and E AE . From equations (2) and equations (3), and since E AB and d / AB are both symmetric, the asymmetry of d / AE and E AE should follow from the conditional entropy S AjB 5 S(r AB ) 2 S(r B ). This is plotted in the solid-thick-black curve in Fig. 1(f). The behaviour of the conditional entropy is thus reflected in the dynamics of quantum correlations and entanglement between A and E, and this can be seen if we compare the behaviour of E / AE around t 5 C 21 throughout the a-axis in Fig. 1(d), with that of S AjB shown in Fig 1(f). Since this conditional entropy gives the amount of partial information that needs to be transferred from A to B in order to know r AB , with a prior knowledge of the state of B 45 , we have shown that this amount of information may be extracted from the dynamics of the quantum correlations generated between the qubits and their environment.
Interestingly, by replacing the definition of d / AB 36 into the first equality of equations (2), we find that the entanglement of the AE partition is exactly the post-measure conditional entropy of the AB partition: that is, the entanglement between the emitter A and its environment is the conditional entropy of A after the partition B has been measured, and hence the asymmetric behaviour of E AE can be verified by plotting this quantity, as shown by the solid-blue curve of Fig. 1(f). A physical reasoning for the asymmetric behaviour of the AE correlations points out that for a R 0 the state j10ae has higher weights throughout the whole dynamics. For instance, for a 5 0 the subsystem B always remains close to its ground state, and transitions between populations r 22 and r 33 do not take place, as it is shown in the inset of Fig. 1(e). This means that partition B keeps almost inactive during this specific evolution and therefore does not share much information, neither quantum nor locally accessible with partition A and the environment E. This can be seen from the quantum discord d / BE , which is plotted as the dotted-dashed-brown curve of Fig. 1(e). We stress that this scenario allows A to get strongly correlated with the environment E.
Although E AE and d / AE are not 'symmetric' with respect to a, it is the information flow, i.e., the way the information gets transferred between the qubits and the environment, the quantity that recovers the symmetry exhibited by E AB in Fig. 1(c). In other words, if the initial state were j01ae, or in general, a R 1, the partition A would remain almost completely inactive and the flow of information would arise from the bipartite partition BE instead of AE. A simple numerical computation for a 5 0 at t 5 C 21 shows that with S A 5 0.96 and S B 5 0.09, respectively. This means that the state of subsystem B is close to a pure state (its ground state), and no much information about it may be gained. Instead, almost all the partial information on the state of A can be caught regardless of whether the system B is measured or not. From this simple reasoning, and by means of the KW relations, the results shown in Figs. 1(c-f) arise. The opposite feature between r A and r B occurs for a 5 1, and in this case, it is the partition BE that plays the strongest correlation role.
Information flow in laser-driven resonant qubits. H L conveys an additional degree of control of the qubits information (entanglement and discord) flow. Let us consider a continuous laser field acting with the same amplitude, , 1 5 , 2 ; ,, on the two emitters, and in resonance with the emitters' transition energy, v L 5 v 0 . The subsystem A gets the strongest correlated with the environment for the initial pure state j10ae in the relevant time regime (see Fig. 1(d)), but this correlation monotonically decays to zero in the steady-state regime. In Fig. 2 we see the effect of the laser driving for the initial state j10ae, for qubits separated by the optimal distance r 5 R c . The laser field removes the monotonicity in the entanglement and correlations decay between A and E, and, as shown in Fig. 2(a), the more intense the laser radiation (even at the weak range , # C), the more entangled the composite AE partition becomes. This translates, in turn, into a dynamical mechanism in which the qubit register AB gets rapidly disentangled and, even at couplings as weak as , , 0.4C, the qubits exhibit early stage disentanglement, as shown in Fig. 2(b). This regime coincides with the appearance of oscillations in the AE entanglement (see Fig. 2(a)), and steady nonzero AE entanglement translates into induced interqubit (AB) entanglement suppression by means of the laser field. By tailoring the laser amplitude we are able to induce and control the way in which the qubits get correlated with each other and with the environment. The graphs 2(c) and (d) show three different scenarios in terms of such amplitude. In graph (c) we plot E AE (solid-blue curve) and E BE (dashed-dotted-grey curve) for the symmetric lightmatter interaction (, 1 5 , 2 ; , 5 0.8C), which leads to ESD in the partition AB (see graph (b)), as well as to a symmetric qubit-environment correlation in the stationary regime. However, as can be seen in main graph (d), where we have assumed , 1 ? , 2 , the breaking of this symmetry completely modifies the qubit-environment entanglement, and now it is qubit A that gets strongly correlated with the environment, while qubit B remains weakly correlated during the dynamics. The opposite arises for , 1 = , 2 (inset of panel (d)): E BE becomes much higher than E AE , which decays monotonically after reaching its maximum. Remarkably, we notice that these two asymmetric cases lead a nonzero qubit-qubit entanglement as shown in the inset of graph (c), where equal steady entanglement is obtained. It means that the qubits early stage disentanglement 19,20 can be interpreted in terms of the entanglement distribution between the qubits and the environment. We interpret this behaviour as the flow and distribution of entanglement in the different partitions of the whole tripartite system 46 , and hence this result shows that an applied external field may be used to dictate the flow of quantum information within the full tripartite system.
Flow of information in detuned qubits. We now consider a more general scenario in which each two-level emitter is resonant at a different transition energy, and hence a molecular detuning D 2 5 v 1 2 v 2 arises; v 0 ; (v 1 1 v 2 )/2. Such a detuning substantially modifies the qubit-qubit and qubit-environment correlations. Since D 2 ? 0, the a 5 1/2-time independence of equations (8) with respect to V no longer holds, and the critical distance R c of Fig. 1(b) becomes strongly modified: the information flow exhibits a more involved dynamics precisely at distances r , R c , and the intermediate suband super-radiant states are no longer the maximally entangled Bell states.
As shown in Fig. 3, the oscillations of AE and AB entanglement (and their maxima) start to decrease and become flat as the molecular detuning rises (D 2 5 0 corresponds to the case shown in Fig. 1(b)). This means that now, it is not only the collective decay rate c that modulates the behaviour of the entanglement and the correlations, but also the interplay between the detuning D 2 and the dipole-dipole interaction V. Note from Fig. 3 that the critical distance R c for which both the correlations of partition AB and those of AE get their maxima disappears with the inclusion of the molecular detuning, and E AB and E AE exhibit maxima at different inter-emitter distances as the detuning increases: E AB remains global maximum for resonant qubits (D 2 5 0) whereas E AE reaches its global maximum for a certain D 2 ? 0 (e.g., D 2 /C 5 8 at k 0 r , 0.1), a value for which E AB is stationary for almost all interqubit separation r.
To complete the analysis of the general tripartite ABE system, we now consider that the asymmetric (detuned) qubits are driven by an external laser field on resonance with the average qubits transition energy, v 0 5 v L , as shown in Fig. 4 for the two-qubit initial state jY 1 ae. We have plotted the entanglement dynamics E AB , E AE , and E BE . Fig. 4(a) shows the entanglement evolution for two identical emitters in the absence of the external driving. The molecular detuning, and the laser excitation have been included in graphs (b) and (c), respectively. The panel (d) shows the entanglement evolution under detuning and laser driving. The monotonic decay of E ij for resonant qubits in Fig. 4(a) is in clear contrast with the E ij -oscillatory behaviour due to the qubit asymmetry, as plotted in Fig. 4(b). A non-zero resonant steady entanglement is obtained thanks to the continuos laser excitation (Fig. 4(c) and (d)). These graphs have been compared with that of the clockwise flow of pairwise locally inaccessible information L P~d / BE zd / AB zd / EA 46 , as shown in the long-dashed black curve.
Discussion
We can now interpret the entanglement dynamics of the AB partition by means of the dynamics of the clockwise quantum discord distribution in the full tripartite system (L P ), and that of the entanglement of AE and BE partitions. From the conservation law 27 between the distribution of the entanglement of formation and discord followed from Eqs. (2) and (3), and noting that L P~LQ for pure states 46 , where L Q~d / BA zd / EB zd / AE , a direct connection between qubitqubit entanglement and qubit-environment entanglement can be established 46 : We note from equation (11) and from the pairwise locally inaccessible information that by knowing d / AB (or d / BA ), we can exactly compute the qubit-qubit entanglement in terms of the system bipartitions AE and BE. In particular, we show how a profile of the qubit-qubit entanglement might be identified from the partial information obtained from E AE and E BE , as indicated in the right-handside of equation (11), and shown in Figs. 4(b) and (d) whereby the local minima of E AB occur at times for which the extrema of E AE and E BE take place. However, it is interesting to note that the locally inaccessible information L P , which gives a global information of the whole tripartite system (the distribution of quantum correlations-discord), can be extracted directly from the quantum state of the register. This fact can be demonstrated by replacing equation (9), and its equivalent formula for the bipartition (11): This relation means that the entanglement of partition AB plus local accessible information of subsystems A and B, i.e. the postmeasured conditional entropies S M A i and S M B i , give complete information about the flow of the locally inaccessible (quantum) information.
To summarise, we have shown that the way in which quantum systems correlate or share information can be understood from the dynamics of the register-environment correlations. This has been done via the KW relations established for the entanglement of formation and the quantum discord. Particularly, we have shown that the distribution of entanglement between each qubit and the environment signals the results for both the prior-and post-measure conditional entropy (partial information) shared by the qubits. As a consequence of this link, and in particular equation (9), we have also shown that some information (the distribution of quantum correlations-L P ) about the whole tripartite system 46 can be extracted by performing local operations over one of the bipartitions, say AB, and by knowing the entanglement of formation in the same subsystem (equation (12)). We stress that these two remarks are completely independent of the considered physical model as they have been deduced from the original definition of the monogamy KW relations (see the Methods section). On the other hand, considering the properties of the specific model here investigated (which may be applicable to atoms, small molecules, and quantum dots arrays), the study of the dynamics of the distribution of qubit-environment correlations led us to establish that qubit energy asymmetry induces entanglement oscillations, and that we can extract partial information about AB entanglement by analysing the way in which information (entanglement and discord) flows between each qubit and the environment, for suitable initial states. Particularly, we have shown that the qubits early stage disentanglement may be understood in terms of the laser strength asymmetry which determines the entanglement distribution between the qubits and the environment. In addition, we have also shown that the extrema of the qubitenvironment AE and BE entanglement oscillations exactly match the AB entanglement minima. The study here presented has been done without need to explicitly invoque any knowledge about the state of the environment at any time t . 0.
An advantage of using the information gained from the systemenvironment correlations to get information about the reduced system's entanglement dynamics is that new interpretations and understanding of the system dynamics may arise. For instance, one of us 47 has used this fact to propose an alternative way of detecting the non-Markovianity of an open quantum system by testing the accessible information flow between an ancillary system and the local environment of the apparatus (the open) system.
Methods
We give a brief introduction to the monogamy relation between the entanglement of formation and the classical correlations established by Koashi and Winter: As a theorem, KW established a trade-off between the entanglement of formation and the classical correlations defined by Henderson and Vedral 37 . They proved that 34 : Theorem. When r AB9 is B-complement to r AB , where B-complement means that there exist a tripartite pure state r ABB9 such that Tr B [r ABB9 ] 5 r AB9 and Tr B9 [r ABB9 ] 5 r AB , where S A :5 S(r A ) is the von Neumann entropy of the density matrix r A ; Tr B [r AB ] 5 Tr B9 [r AB9 ], E AB :5 E(r AB ) is the entanglement of formation, and J / AB' :~J / r AB' ð Þleads the classical correlations. For our purpose we only show some steps in the proof of the KW relation (equation (13)); the complete proof can be straight-forwardly followed in 34 . By starting with the definition of the entanglement of formation: where the minimum is over the ensamble of pure states {p i , jy i ae} satisfying X i p i y i j i y i h j~r AB , it is possible to show, after some algebra, that Conversely, from the definition of classical correlations 37 : www.nature.com/scientificreports | 6,959.4 | 2014-06-10T00:00:00.000 | [
"Physics"
] |
Machine-learning model to predict the tacrolimus concentration and suggest optimal dose in liver transplantation recipients: a multicenter retrospective cohort study
Titrating tacrolimus concentration in liver transplantation recipients remains a challenge in the early post-transplant period. This multicenter retrospective cohort study aimed to develop and validate a machine-learning algorithm to predict tacrolimus concentration. Data from 443 patients undergoing liver transplantation between 2017 and 2020 at an academic hospital in South Korea were collected to train machine-learning models. Long short-term memory (LSTM) and gradient-boosted regression tree (GBRT) models were developed using time-series doses and concentrations of tacrolimus with covariates of age, sex, weight, height, liver enzymes, total bilirubin, international normalized ratio, albumin, serum creatinine, and hematocrit. We conducted performance comparisons with linear regression and populational pharmacokinetic models, followed by external validation using the eICU Collaborative Research Database collected in the United States between 2014 and 2015. In the external validation, the LSTM outperformed the GBRT, linear regression, and populational pharmacokinetic models with median performance error (8.8%, 25.3%, 13.9%, and − 11.4%, respectively; P < 0.001) and median absolute performance error (22.3%, 33.1%, 26.8%, and 23.4%, respectively; P < 0.001). Dosing based on the LSTM model’s suggestions achieved therapeutic concentrations more frequently on the chi-square test (P < 0.001). Patients who received doses outside the suggested range were associated with longer ICU stays by an average of 2.5 days (P = 0.042). In conclusion, machine learning models showed excellent performance in predicting tacrolimus concentration in liver transplantation recipients and can be useful for concentration titration in these patients.
Results
Overall, 6264 tacrolimus samples were collected up to 14 days post-liver transplantation from 443 patients who underwent liver transplantation at the Seoul National University Hospital (Supplementary Fig. S1).Among this group, 355 (80%) patients were randomly selected to train the model, and the remaining 88 (20%) were used for the test data for internal validation (Table 1).All patients received mycophenolate mofetil and steroids in combination with tacrolimus for immunosuppression.
Figure 1 shows the structure of the model.The best performance was achieved with the following variables: two times daily tacrolimus doses, whole blood tacrolimus concentration, weight, serum AST, and creatinine levels.The details of the feature selection and hyperparameter optimization are provided in Supplementary The effect of each input feature was illustrated in the SHAP summary plot (Supplementary Fig. S2).Specifically, the increase in previously measured tacrolimus concentration, administered tacrolimus dose, serum AST, and age were associated with a higher level of next tacrolimus concentration.
Both machine-learning models outperformed the conventional LR and the population PK models (Table 2).Specifically, the LSTM exhibited the best predictive performance among the models.The GBRT model was also S3 illustrates the correlation between the observed tacrolimus blood concentration and the predicted blood concentration from the models.
In the external validation, the LSTM model trained solely with the Seoul National University Hospital data was applied to the eICU-CRD dataset of 106 liver transplants.Although the overall performance error increased (Table 2), the LSTM model's performance was maintained in the external validation (RMSE of 1.7 ng/mL, MAE of 1.5 ng/mL, MDPE of 8.8%, and MDAPE of 22.3%).The performances of the GBRT, LR, and PK models were relatively poor in the external validation compared to those of the LSTM model (RMSE, MAE, MDPE, and MDAPE of 2.2 ng/mL, 1.9 ng/mL, 25.3%, and 33.1%, respectively, for the GBRT model; RMSE, MAE, MDPE, and MDAPE of 2.0 ng/mL, 1.6 ng/mL, 13.9%, and 26.8%, respectively, for the LR model; RMSE, MAE, MDPE, and MDAPE of 2.3 ng/mL, 1.8 ng/mL, − 11.4%, and 23.4%, respectively, for the PK model, all P < 0.001).
Table 3 compares the predicted tacrolimus concentration with the observed concentration by evaluating the administered dose against the dose suggested by the LSTM model.The results showed that when the patients received the suggested tacrolimus doses predicted by the LSTM model, a significantly high frequency of the actual concentration was within the therapeutic range (P < 0.001).The LSTM model identified clinical underdosing or overdosing in 64% of administered doses during the early post-transplant period.ICU stays were longer for patients receiving tacrolimus doses outside the suggestion (193 vs. 250 patients; mean (standard deviation) ICU stay: 8.0 (16.3) vs. 5.5 (5.7) days, P = 0.042).Even with suggested doses, concentrations fell outside the target range at rates of 12%, 15%, and 13% for the LSTM, GBRT, and LR models, respectively.
In clinical outcomes, tacrolimus concentrations exceeding the target range in the early post-transplant period were associated with liver transplant rejection (197 vs. 244 patients; 16% vs. 9%, P = 0.031).However, exceeding the therapeutic range or high intra-patient variability of tacrolimus were not associated with acute kidney failure or CMV infection.
Discussion
This study developed and externally validated machine-learning models to predict tacrolimus concentrations in liver transplantation recipients.Our model showed clinically acceptable performance, superior to the conventional LR and PK models in predicting tacrolimus concentrations during the postoperative period, and it was well maintained in the external validation.Translating tacrolimus concentration predictions into dosage recommendations revealed that deviations from suggested doses were associated with exceeding the target range and prolonged ICU stays.
Several population PK models for predicting tacrolimus concentration have been developed in adult liver transplantation recipients.However, a recent systematic analysis found that these models exhibit inadequate accuracy in external validation 22 .Only 37% of the 16 models reviewed had an acceptable level of accuracy with an MDPE of < 20%, and all 16 models demonstrated poor precision, as indicated by an MDAPE of > 30% [22][23][24] .These poor outcomes can be attributed to the complex and non-linear kinetics of tacrolimus in liver transplantation recipients.The drug kinetics can be influenced by several factors, such as varying bioavailability 25 , changes in albumin synthesis, erythrocytes where tacrolimus binds 26 , or the distribution and elimination process following biliary complications 27 .Additionally, drug clearance in transplantation recipients is time-dependent since the metabolic function of the liver improves during the early post-transplant stage 28 .
Therefore, to address these complexities, we used a data-driven approach and machine-learning algorithms that could capture the time-dependent non-linear relationship between drug doses and the effect, as we demonstrated in a previous study 22 .Our LSTM-based model showed superior predictive performance in external validation, with an MAE, MDPE, and MDAPE of 1.5 ng/mL, 8.8%, and 22.3%, respectively.These metrics fall within the preset criteria of population PK models for external validation (MDPE ≤ ± 20% and MDAPE ≤ ± 30%) [22][23][24] .
Table 3. Number and proportion of patients following the suggested doses of the LSTM model versus achieving the target concentration range.Each cell contains the number of cases according to whether the actual dose and tacrolimus concentration were lower, on target, or higher than the dose suggested by the machine-learning model and the target concentration range (8-10 ng/mL), respectively.The observed tacrolimus concentration range significantly differed depending on whether the actual dose was consistent with the suggested dose (P < 0.001)..These results demonstrate potential for generalizability in predicting tacrolimus concentration in liver transplantation recipients.
Few studies have implemented model-guided dosing algorithms in clinical settings due to the small clinical population and predictive model inaccuracies 30 .Our results showed that patients administered the suggested tacrolimus doses predicted by the LSTM model experienced a considerably higher frequencies of actual concentrations within the therapeutic range.In addition, patients who received doses outside the suggested range were associated with longer ICU stays by an average of 2.5 days (P = 0.042).These results align with previous studies demonstrating that personalized, dynamic tacrolimus dosing over time also showed shorter median hospital stays compared to conventional dosing (10 vs. 15 days) 31 .Our model-guided dosing algorithm has the potential to improve patient clinical outcomes when employed during the early post-transplant period.
The small positive bias of the LSTM model in the external validation may be attributed to racial differences in bioavailability.Lu et al. reported that Asians have higher bioavailability than non-Asians 32 .This discrepancy in bioavailability could result in over-prediction of drug concentration when applying models developed for Asians to other racial datasets.However, variations in measurement methods or factors, including residual noise, could also contribute to these differences.Therefore, further analysis is necessary for validation.
Among the various combinations of clinical covariates reflecting overall graft function (ALT, AST, total bilirubin, and INR) 33 , our model incorporating AST demonstrated better performance.AST and ALT levels are sensitive markers of hepatocellular injury within the first 7 days post-transplantation period, rapidly reflecting graft function 33,34 .In contrast, total bilirubin and INR levels during the first 6 days post-transplantation could be influenced more by the recipient's pre-transplant status than by the new graft function 33,34 .This distinction may explain the superior performance of our model incorporating AST over other covariates during rapid changes of liver function in the early post-transplant period.
While we propose using machine learning-assisted concentration prediction and dose adjustment for tacrolimus, therapeutic drug monitoring remains essential.The reduced predictive capability of the model without concentration data highlights the necessity for therapeutic drug monitoring.In addition, the LSTM model misidentified the administered doses as correct doses in 12% of test datasets.Suggesting the median of the possible dose combinations expected to fall within the target range could reduce incorrect dose suggestions, but requires laboratory confirmaiton.The benefits and feasibility of the LSTM-assisted approach in suggesting tacrolimus doses, alongside therapeutic drug monitoring, warrants further confirmation in prospective studies.
Our study had some limitations.First, although our model's performance was externally validated in different races and locations, this is a retrospective study, and bias may exist.For example, the clinician's aim to maintain a proper tacrolimus concentration resulted in an imbalanced data distribution with limited data outside of the target range and poor predictive performance 35 .Therefore, additional data beyond the clinical range can improve our model's accuracy.Second, although we replaced the missing values using multiple imputations, our model still requires several laboratory tests, such as those involving serum ALB, creatinine, ALT, AST, and HCT.These results may not be available daily for three consecutive days at some centers, either due to protocol differences or resource limitations, particularly in developing countries.Third, additional covariates, such as the genotype of metabolic enzymes, might affect the tacrolimus dose-concentration relationship.However, adding these covariates to the model remains difficult in most clinical settings.Fourth, our model for twice-daily dosing in the early post-operative period has limited applicability for patients rapidly converting to once-daily dosing 36 .Future studies on predictive models for once-daily dosing could address this limitation.
In conclusion, we developed machine-learning models that predict tacrolimus concentrations in liver transplantation recipients.Our LSTM model demonstrated excellent performance in external validation.Dosing based on the model's suggestions were resulted in concentrations within the therapeutic range in more cases.Patients who received doses outside the suggested range were associated with longer ICU stays.Therefore, this approach can be useful for accurately predicting tacrolimus concentrations and suggesting appropriate doses for patients undergoing liver transplantation to improve clinical outcomes.
This underscores the potential of machine learning algorithms for tacrolimus concentration prediction and dosage suggestions to enhance patient outcomes.
Study approval
This study was conducted in accordance with the tenets of the Declaration of Helsinki.The Institutional Review Board of Seoul National University Hospital approved the study proposal (approval number: H-2007-083-1141) and waived the requirement for written informed consent due to the retrospective study design.After obtaining approval, we retrospectively collected data from patients who underwent liver transplantation between January 2017 and October 2020.Patients aged < 15 years or those without any record of tacrolimus concentrations were excluded.We followed the recommendations from the article "STROCSS 2021: Strengthening the Reporting of Cohort, Cross-sectional and Case-control Studies in Surgery" 37 .
Data collection
The two times daily doses of tacrolimus up to 14 days postoperatively and whole blood tacrolimus concentration measured by chemiluminescence immunoassay were collected from electronic medical records at the Seoul National University Hospital for model training and internal validation.Additionally, the patient's age, sex, height, weight, Model for End-Stage Liver Disease (MELD) score, type of donor, indication for transplant, other immunosuppresants were recorded.Blood test results for alanine aminotransferase (ALT), aspartate aminotransferase (AST), total bilirubin, International Normalized Ratio (INR), serum albumin, serum creatinine, hematocrit were collected daily 38 .
During the study period, the patients were administered an oral dosage of tacrolimus two times daily from the 1st day after liver transplantation.Doses were empirically decided by the attending intensivists based on the patient's weight, laboratory results related to liver and renal functions, and the whole blood tacrolimus concentration measured before taking the morning dose of the medication.Dose control and drug concentration monitoring were repeated until the tacrolimus concentration reached a steady-state concentration in the target range between 8 and 10 ng/mL.
Model development
A machine-learning model was developed to predict the next whole tacrolimus concentration test results based on the history of oral tacrolimus doses, measured whole blood tacrolimus concentrations, time-dependent covariates (weight, ALT, AST, total bilirubin, INR, serum albumin, serum creatinine, hematocrit) of previous n days, and time-independent covariates (age, sex, and height).The dataset comprised the variables for n + 1 consecutive days, the first n days for inputs, and the last day for output.Furthermore, the missing values were imputed using multiple imputations.The concentrations and doses of the tacrolimus before the first administration were substituted with zeros.
A long short-term memory (LSTM) model was developed using the input nodes of the tacrolimus dose, measured tacrolimus concentration, and time-dependent covariates.The LSTM outputs were concatenated with time-independent covariates and entered into the fully connected layer.These structures were inspired by Lee et al. 's study 19 .
Gradient-boosted regression tree (GBRT) and LR models have also been developed for comparison.These models received the same inputs as the final LSTM model based on data from the previous n days.GBRT hyperparameters, such as the number of estimators and maximal depth, were optimized using a similar method.
We employed a one-compartment PK model with first-order absorption developed for patients in the first 2 weeks post-liver transplantation 39 .The PK parameters were adjusted based on the post-transplant stage and the serum albumin, AST, or hematocrit measurements: apparent clearance (CL/F) of 8.93 and 11.0 L/h for AST ≥ 500 and < 500 U/L, respectively, and apparent volume (V/F) of 328 L between 0 and 3 days post-transplantation period.After 4 days, apparent clearance was set to 25.1 L/h for serum albumin of < 2.5 g/dL or hematocrit of < 28% and 17.1 L/h otherwise with an apparent volume of 568 L.
Once the best combination of features and hyperparameters was identified, multiple random sampling was performed to evaluate the models' internal and external validation performance.
Training and validation of the models were performed by the author's program written in Python (version 3.10.5)using the Keras library (version 2.10.0).
We compared the accuracy of the models with all combinations of the abovementioned variables for feature selection.Among the various combinations, the one with the highest performance and fewer variables in the five-fold cross-validation was selected.A grid search was performed to determine the optimal combination of hyperparameters.Possible combinations of the hyperparameters were 8, 16, 32, 64, 128, and 256 for the number of nodes in the LSTM; 8, 16, 32, 64, and 128 for the number of nodes in the fully connected layer; and 2-7 days for the number of days for input.
To enhance the model transparency and reveal the effects of the input features on the next tacrolimus concentration, we applied the Shapley Additive exPlanations (SHAP) algorithm to further visualize the explanation at the feature level using SHAP version 0.39.0 in Python 40 .Briefly, the SHAP summary plot was used to illustrate the strength and the direction of associations between features and tacrolimus concentration.
Internal validation
Multiple random sample validations were conducted.The samples in the derivation cohorts were classified into training (80%) and test (20%) sets using 10 random seeds.Subsequently, the training of the model was repeated using similar methods to estimate the mean performance and 95% confidence interval 41 .The predictive performance was evaluated based on the root-mean-squared error (RMSE), median absolute error (MAE), median performance error (MDPE), and median absolute performance error (MDAPE).The agreement between the predicted and measured tacrolimus concentrations was evaluated for each model.
External validation
For external validation, this study analyzed data from the eICU-CRD dataset, which included over 200,000 intensive care unit stays from 208 hospitals in the United States between 2014 and 2015 21 .The "patient unit stay id" of patients whose admission diagnosis was "liver transplantation" was extracted from the "admission dx" table.Patients aged < 15 years were excluded.Whole blood tacrolimus concentration, ALT, AST, total bilirubin, INR, serum albumin, serum creatinine, and hematocrit measurements (labeled as "lab result offset") were queried from the "lab" table.The tacrolimus doses were retrieved from the "medication" table and aligned with the lab result based on "drug start offset, " "drug stop offset, " and "lab result offset." Cases were excluded when the route of drug administration was sublingual or intravenous instead of oral.Data on age, sex, height, and weight were obtained from the "patient" table.Data with missing drug doses or concentrations were excluded to ensure consistency with the training dataset.The LSTM, GBRT, and LR models predicted tacrolimus concentrations in this dataset to confirm the external validity of the model performance.
Dose recommendation
The model suggested tacrolimus doses by first predicting the tacrolimus concentration for all hypothetical doses between the minimum (0.5 mg) and maximum doses (20 mg).The tacrolimus doses predicted to achieve the target concentration range (8-10 ng/mL) were then identified as the suggested doses.A 3 × 3 contingency table was produced by juxtaposing the administered dose against the suggested doses and the actual measured concentration within the therapeutic range.Subsequently, these frequencies were statistically examined using the chi-square test.We further evaluated whether dose adjustments aligned with the suggested tacrolimus doses were associated with expedited ICU discharges.We compared the duration of ICU stays between patients who received tacrolimus doses within and outside the suggested range.
Clinical outcome
We investigated whether tacrolimus concentrations outside the target range or high intra-patient variability, defined as a standard deviation of tacrolimus concetnration over 2 ng/ml, significantly impacted prognosis during the first 14 days post-transplant 42 .The clinical outcomes evaluated included transplantation rejection, renal failure, and CMV infection.Transplant rejection was assessed by transplant surgeons based on laboratory findings, biopsy results, and imaging examinations 43 .Acute kidney failure was defined as an increase in serum creatinine by 0.3 mg/dL or more within 48 h or an increase to 1.5 to 1.9 times baseline within the previous 7 days 44 .CMV infection was diagnosed using PCR assays 45 .We used the chi-squared test to analyze the association between tacrolimus concentration and clinical outcomes during the early post-transplant period.
Sensitivity analysis
Sensitivity analyses were performed to confirm the robustness of the LSTM model.Specifically, we trained the models without any drug concentration results and evaluated their performance.
Statistical analysis
Formal sample size calculation was not performed because of the inherent nature of retrospective studies.Instead, the study used available data from tertiary hospitals and a large open dataset to develop and test the prediction model.The patient demographics and doses and concentrations of tacrolimus are described as means (± standard deviations) or medians (interquartile ranges), depending on the results of the Shapiro-Wilk test, and the categorical variables are presented numerically (percentages).Continuous variables, such as the doses and concentrations of tacrolimus, age, weight, height, AST, ALT, total bilirubin, INR, serum albumin, serum creatinine, and hematocrit were compared using the Student's t-test or the Mann-Whitney U-test.Categorical variables, such as patient sex, were compared using Pearson's chi-square test.
Model performance was evaluated using internal test and external validation datasets.The RMSE, MAE, MDPE, and MDAPE were compared using analysis of variance, followed by a post-hoc t-test with Bonferroni correction.An MDPE of < 20% or MDAPE of < 30% was determined to be clinically acceptable based on previous studies [22][23][24] .
Statistical analyses were performed using Python and IBM SPSS for Windows, version 21 (IBM, Armonk, NY, USA), and a significant difference was considered at P < 0.05.The code used for the analysis is attached in Supplementary Table S4.
Fig. 1 .
Fig. 1.Structure of the machine-learning model.The input data of the LSTM layer were the doses of tacrolimus, measured tacrolimus concentration, and dynamic covariates (weight, AST, ALT, total bilirubin, INR, albumin, serum creatinine, and hematocrit) for 3 days.The output of the LSTM model, O t , was concatenated with static covariates (age, sex, and height) and subsequently passed through the FNN layer.The predicting variable was the tacrolimus concentration of the next day (Conc t ).The solid arrow presents the progress of learning.AST, aspartate aminotransferase; ALT, alanine transferase; BIL, total bilirubin; INR, international normalzed ratio; ALB, serum albumin; sCr, serum creatinine; HCT, hematocrit; FNN, feed-forward neural network; LSTM, long short-term memory. https://doi.org/10.1038/s41598-024-71032-ywww.nature.com/scientificreports/
Table 1 .
General characteristics of the patients in the training and testing groups.Data are expressed as means (standard deviations) or number (percentages).AST, aspartate aminotransferasel; ALT, alanine transferase; INR, International Normalized Ratio; MELD, Model for End-Stage Liver Disease.
Table S1
and S2.After a grid search for hyperparameter optimization, the combination of 3 days of data as input, 16 nodes in the LSTM, and 32 nodes in the fully connected layer showed the least validation error.Additionally, the validation errors of the models based on the input features and hyperparameters are provided in Supplementary TableS1 and S2.
Table 2 .
Comparison of prediction performance in the internal and external validations.Data are presented as means (standard deviations).All P < 0.001 after 10 random trials, except for comparing RMSE in the internal validation * (P = 0.144).GBRT, gradient boosted regression tree; LSTM, long short-term memory; PK, pharmacokinetic; MAE, median absolute error; MDAPE, median absolute performance error; MDPE, median performance error; RMSE, root mean squared error. | 5,012 | 2024-08-28T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Scanning electron microscopy of Rydberg-excited Bose-Einstein condensates
We report on the realization of high resolution electron microscopy of Rydberg-excited ultracold atomic samples. The implementation of an ultraviolet laser system allows us to excite the atom, with a single-photon transition, to Rydberg states. By using the electron microscopy technique during the Rydberg excitation of the atoms, we observe a giant enhancement in the production of ions. This is due to $l$-changing collisions, which broaden the Rydberg level and therefore increase the excitation rate of Rydberg atoms. Our results pave the way for the high resolution spatial detection of Rydberg atoms in an atomic sample.
Introduction
Because of their unique properties Rydberg atoms are an excellent tool for the investigation of few and many-body quantum systems. The large dipole moment ensures that locally only one excitation is possible, and thus a blockade arises [1,2]. This leads to an excitation which is distributed over all atoms within the blockade volume, forming a collectively excited state [3,4,5]. Rydberg ensembles also strongly interact with light fields, which can be used to entangle atoms with photons for quantum communication purposes [6,7]. It has also been shown that the blockade effect can be used to implement quantum gates between neutral atoms [8]. Another intriguing application is the dressing of ground state atoms with highly excited Rydberg states, giving rise to a tunable interaction potential between the dressed ground state atoms [9,10,11].
Direct Rydberg atom imaging has been demonstrated so far in optical lattices [2], and in ion emission microscopy techniques [1]. Rydberg atom imaging capabilities give direct acces to spatial correlations and are the key component to study new quantum phases [12], tailored spin systems [13] and energy transfer mechanisms [14].
Scanning electron microscopy has proven to be a powerful tool to manipulate and image ultracold atoms in their ground state [15,16,17,18]. Due to the huge electric dipole moment Rydberg atoms are much more sensitive to electric fields and feature huge cross sections for electron scattering. Combining scanning electron microscopy with Rydberg excited atomic samples therefore offers new possibilities to prepare, image, and probe theses systems.
Our experimental apparatus combines a setup for the all-optical production of ultracold atomic samples with a scanning electron microscope for high resolution in-situ imaging and a high power ultra violet (UV) laser setup for one-photon Rydberg excitations. We first describe the apparatus and the experimental sequence that allows us to acquire scanning electron microscope images of Bose-Einstein condensates. Thereafter we describe the UV laser setup which allows the single-photon excitation of Rydberg atoms. In the last section we demonstrate the successful application of the electron microscopy technique on a BEC whose atoms are continuously excited to the 38P 3/2 Rydberg state. The increase of the signal with respect to the non-excited case is far larger than the expected n 2 factor [19], making such technique promising for the high resolution spatial detection of Rydberg atoms in atomic samples. With the help of a simple model we finally show that such giant enhancement is due to l-changing collisions induced by the electron beam itself.
Experimental Setup
Our experimental sequence starts pre-cooling 87 Rb atoms in a two dimensional magneto optical trap (2D MOT), orientated with an angle of 45 • above the horizontal plane. By means of a resonant laser beam along the axis of the 2D MOT we transfer the atoms (1 × 10 9 atoms/s) into the center of the science chamber, where they are collected and further cooled in a 3D MOT (2.5 × 10 9 atoms after 3s). The laser sources for both traps are grating stabilized diode lasers locked on the D 2 -transition line of Rubidium. The magnetic field for the 3D MOT is provided by eight electrodes placed directly inside the vacuum chamber. Each electrode has a section of 5 × 5 mm 2 and is made of oxygen-free copper. As shown in fig. 1 the electrodes are arranged in such a way that they form two effective coils in almost Helmholtz geometry with an inner radius of 15 mm and a distance of 25 mm. The electrodes and the corresponding vacuum feedthroughs can hold up to ±1 kV and 200 A. By setting the potential of the electrodes we can produce arbitrary electric fields (500 V/cm) or electric field gradients (200 V/cm 2 ) in the center of the chamber, which are of fundamental importance for the manipulation of Rydberg atoms. Alternatively magnetic fields (70 G) or magnetic field gradients (20 G/cm) can be produced by a current up to 200 A in the electrodes in Helmholtz or anti-Helmholtz configuration. During the 3D MOT the electrodes are driven in anti-helmholtz configuration with a current of 100 A which results in a magnetic field gradient of 10 G/cm.
The subsequent evaporative cooling stage, necessary to reach the ultra-cold regime, is performed in a red detuned crossed dipole trap at 1064 nm. The trapping light is provided by a fiber amplifier (Nufern, NuAmp fiberlaser) pumped with a low noise diode laser (INNOlight, Mephisto S). The light at the output of the amplifier is divided in three beams, a strong beam with a power of 15 W on the atoms and two weaker beams with 1.4 W. The power of the strong beam is controlled with a free space AOM (Gooch & Housego, R23080-3-1.06-LTD) and is focused on the atoms with a waist of 25 µm in the middle of the science chamber. To control the power of the weak beams we use two fiber-coupled AOMs (Gooch & Housego, FiberQ). The two beams at the output of the fibers are focused to a waist of 80 µm in the center of the science chamber. One of them perpendicularly crosses the strong beam thus realizing a crossed dipole trap scheme. The other one is aligned in order to propagate collinearly with respect to the strong beam.
The transfer of the atoms from the 3D MOT to the crossed dipole trap is done in a dark MOT stage of 100 ms, where we strongly suppress the repumping light to 1/70 of its maximal power and widely detune the cooling light to −195 MHz. The strong and the weak orthogonal trapping beams are set to full power. At the end of the dark MOT stage we end up with ≈ 1.5×10 7 atoms in the crossed dipole trap at a temperature of 250 µK in the |F = 1 hyperfine manifold. After plain evaporation for 50 ms we start the forced evaporative cooling by exponentially lowering the power of the stronger beam while we hold the weak beam constant. This allows us to keep the collisional rate sufficiently high to ramp down the strong power beam with a time constant of In case a BEC in a defined polarized state is needed, we can apply a small magnetic field gradient along the vertical axis during the evaporation. This compensates the effect of gravity for the |m F = +1 state while it enhances the effect of the gravity for the |m F = −1 state. During the evaporation the atoms in the |m F = +1 state always feel a deeper trap allowing us to selectively remove the atoms in the other states thus producing a purely polarized BEC (1.0 × 10 5 atoms).
In order to get an isotropic trapping potential, we switch off the strong beam and switch on the third beam. This is done by sigmoidally ramping it up to the same power as the crossed beam in a time of 100 ms, while the strong beam is completely ramped down in the last 100 ms of the evaporation stage. With this technique we end up with 2.5 · 10 5 atoms in a trapping potential with the frequencies ω x /ω y /ω z = 2π × 80/80/94 MHz.
The scanning electron microscope
On the upper part of our vacuum chamber we have installed a UHV-compatible scanning electron microscope (Delong instruments, ECLEX III). It consists of an emitter, placed on the upper end, which provides an electron beam (EB) with an energy of 6 keV, and of two magnetic lenses. The current and the diameter of the EB can be set by different apertures, which can be inserted into the beam inside the column, and by the current of the first magnetic lens (gun lens). The second magnetic lens (objective lens) is used to focus the beam onto the sample. For beam shaping . . . and guiding, three electric octopoles and one magnetic deflector are installed. The first electric octopole (located at the gun lens) and the magnetic deflector (located at the objective lens) are used to guide the beam through the optics and to correct for aberrations. The other two electric octopoles, installed at the very end of the column, are instead used to deflect the EB. The first, faster one (200 MHz), is used to move the beam along a scanning pattern, while the second, slower, is used to center the scan pattern on the atoms. Since it is possible to change the voltage of the octopoles in both directions independently, arbitrary scan patterns can be realized.
The cone of the electron column is placed inside the vacuum chamber and has a distance of 11 mm to the geometric center of the experiment (see fig 1). The EB at the output of the electron microscope has a transverse 2D gaussian envelope. It is focused to the middle of the chamber, where it has a width (FWHM) of 170 nm with a beam current of 20 nA and a depth of focus of 35 µm. The beam is finally stopped in a Faraday cup placed at the bottom of the vacuum chamber.
A high resolution electron-microscopy image of the trapped atomic cloud is obtained exploiting the ionization of the atoms produced by the electron-atom collisions [15]. Each time an electron collides with an atom there is 40% of probability to ionize it [18]. The ions that are produced by electron-impact are guided by dedicated ion-optics to an ion detector (dynode multiplier, ETP 14553) placed at the bottom of the vacuum chamber. Synchronizing the arrival times of the ions with the scanning pattern of the EB we can reconstruct the in-situ profile of the atomic density distribution [15]. In fig. 2 we show scanning electron microscopy images of ultracold samples across the BEC transition together with the integrated density profiles. Due to the high resolution of 170 nm it is possible to observe the sample of atoms directly in the trap, which allows us to see the thermal fraction of the atomic sample at temperatures, where the time of flight imaging shows a pure BEC.
Time resolved scanning electron microscopy
Scanning electron microscopy has a sequential image formation process. This can be used to perform in vivo studies of the time evolution of a quantum gas. For instance, when the EB is
Time pointed at a fixed position in an atomic sample, the time evolution of the local atomic density can be examined. This can be directly used to measure the trapping frequencies very efficiently. By initially displacing one of the three trapping beams against the center of the optical dipole trap, we induce an oscillation of the BEC after abruptly switching off displaced beam. This oscillation can be directly monitored by the EB, which is aligned to the center of the trap and creates ions whenever the oscillating BEC passes underneath it.
In fig. 3a we show the corresponding ion signal. The oscillation is clearly visible and the Fourier transform of the signal immediately gives the trapping frequency. In comparison to other techniques only a few experimental cycles are needed to determine the trapping frequencies.
Alternatively, one can continuously perform a rectangular scan of the sample, thus visualizing the oscillations in space ( fig. 3b).
Spectroscopy of the single photon transition
To excite rubidium ground state atoms into Rydberg states, usually a two or three photon transition is used [20,7,21]. In our setup we employ a one-photon transition, which has the advantage that no intermediate state is populated. Furthermore the excitation in nonisotropic P-states can be used to realize direction dependent interactions [22]. In the case of Rubidium the transition wavelength to Rydberg states is around 297 nm. The implementation of a powerful and tunable laser source at such wavelength is technically challenging. Our approach consists in frequency-doubling the light at 594 nm produced by a dye laser (Sirah, Matisse DS). Such laser, when pumped with 15 W of 532 nm light (Spectra Physics, Millennia), is able to provide up to 2.5 W laser power. The output laser beam is coupled to a photonic crystal fiber and sent to a bow-tie cavity kept under vacuum. The non linear element that we employ for the frequencydoubling is a cesium-lithium-borate (CLBO) crystal, chosen for its high damage threshold. The measured finesse of the cavity is 370 and the conversion efficiency is 50%, allowing us to obtain up to 700 mW of UV light with a relative RMS power fluctuation of 4 %. The stabilization of the cavity length is realized by using a Pound-Drever-Hall scheme. After the cavity the UV laser is power stabilized by an AOM (AA optoelectronic, MQ200-A 1,5-266.300) and guided to the chamber, where it is focused on the atoms with a typical waist of 100 µm.
An important feature of the UV laser setup is the wide tunability. We are able to excite every . . Rydberg level between n = 30 and the ionization threshold. This is provided by the extremely large tuning range ensured by the dye laser. Its frequency is stabilized using an external cavity whose length is set by using an auxiliary diode laser at 780 nm. The frequency of this auxiliary laser is offset locked with respect to the cooling line of Rb. This allows us to tune the frequency of the UV laser in a range of up to 2 GHz without changing the locking point.
When atoms are excited to Rydberg states, several ionization channels are possible. In a dense ultra cold cloud they can evolve into ions via dipole-dipole interaction [23] or photo ionization due to the trapping laser [24] or black body radiation [25]. In the case of photo ionization the amount of ions is proportional to the excitation probability to the Rydberg state and is a direct measure of the excitation rate. In fig. 4 such an ion signal is shown as a function of the excitation laser detuning from the 5S 1/2 (F = 1) → 51P transition. We have applied a magnetic field of 34 G to separate the individual m j states. The resonance was measured in a dilute gas with a peak density of n = 4.2 × 10 11 cm −3 and a temperature of 1.5 µK. The spectrum illustrates the large mode-hop free tuning range of the UV laser setup and the small laser linewidth of less than 700 kHz.
Electron-microscopy of a Rydberg excited BEC
In this section we investigate the effect of the single-photon Rydberg excitation on the performances of the electron microscopy. To do so we perform continuous line scans over a BEC with the electron beam alone and compare this with a line scan where we additionally shine the UV laser on the atomic sample. The UV beam is blue-detuned with respect to the 5S 1/2 → 38P 3/2 transition. This is necessary to ensure a long enough lifetime and to avoid the excitation of molecular states on the red-detuned side [20,26]. In this configuration three different single-atom ionization channels are possible and they are illustrated in fig. 5a: the direct electron impact ionization of ground state atoms, the ionization caused by the UV laser alone and the ionization stemming from the combined action of the Rydberg excitation and of the EB. We define the corresponding single-atom ion rates as Γ EB , Γ U V and Γ lc respectively. The effect of the single processes can be identified analyzing the line-scans reported in fig. 5b, where the UV light detuning is ∆ = 2π × 10 MHz. The tilde on the symbols indicates that the measured ion rates are resulting from all the atoms present in the respective interaction volumes. The action of the UV light leads to a constant offset in the line scan of the Rydberg excited sample (N tot = 150k atoms), as shown in fig. 5b, and the single-atom ion rate is calculated to be Γ U V =Γ U V /N tot = 0.3 Hz. When the EB hits the cloud and the UV beam illuminates the sample, an additional ion rateΓ lc is measured. In order to quantify and characterize this effect, in the following we analyze the ion rates measured when the EB illuminates the center of the cloud. The atoms ionized directly or indirectly by the EB are only those present in the small volume defined by the intersection of the atomic distribution and the EB itself. We calculate the number of atoms involved by integrating the atomic density over the volume of the EB, which we approximate by a cylinder with a radius of 170nm. In the center of the atomic distribution we find N EB 360 atoms. In such position the single-atom ion rate is Γ lc =Γ lc /N EB = 2.7 kHz while Γ EB =Γ EB /N EB = 2.6 kHz. Thus we observe that the ion rate for a single atom under off-resonant illumination with a UV laser can be enhanced by 4 orders of magnitude if an electron beam is shone simultaneously on the atoms (Γ lc /Γ U V = 0.9 × 10 4 ). In addition, with respect to the electron impact ionization of the ground-state we observe an enhancement of a factor of (Γ lc + Γ EB )/Γ EB = 2.
We now show that the observed giant enhancement can be explained by l-changing collisions in nearby Rydberg states. In this process the incoming electrons inelastically collide with Rydberg excited atoms and change the n and l quantum number. The scattering cross section for this Figure 5. a) Schematics of the three ionization channels for a single atom. The first process (effective ion rate Γ EB , green) is direct electron impact ionization of ground state atoms with ionization rate γ EB . The second ionization channel (effective ion rate Γ U V , blue) is a two step process, consisting of an optical excitation with Rabi frequency Ω followed by an ionization process with rate γ opt . The third ionization channel (effective ion rate Γ lc , red) is a three step process, which consists of an optical excitation, an l-changing collision (rate γ lc ) and electron impact ionization of the Rydberg states (rate γ EB ). b) Ion rates during an electron beam line scan (waist w = 170 (20) nm, current I = 25(1) nA) over a BEC with N tot = 150k atoms. The red (black) dots show the signal with (without) the simultaneous irradiation of the UV laser (I = 7 mW, w = 700 µm, ∆ = 2π × 10 MHz). The given rates correspond to the processes described in a) -the tilde indicates that they denote the integrated signal coming from all atoms which contribute to the signal. The ion rates are calculated with time bins of 10 µs corresponding to a position bin of 300 nm. event is given by [27] σ n,l = πa 2 where a 0 is the Bohr radius, m e the electron mass, M the mass of the atoms, Z the charge of the nucleus, E the collision energy and R the ionization energy. The first coefficient A(nl) is defined as [27] A(nl) = 1 3 which only depends on the quantum numbers n and l. The second constant B(nl) can be estimated, at least for high n states, as an average over all the l states: B(n)/A(n) = a ln(n) + b with a ≈ 1.84 and b ≈ 2.15.
With an EB energy of 6 keV we can calculate the l-changing cross section to be σ l (n = 38, l = 1) = 8.7 × 10 −13 cm 2 . The resulting scattering rate into nearby Rydberg states is then given by where e is the electron charge and j = 27.5 A/cm 2 the current density of the EB. We obtain a scattering rate of γ lc = 150 MHz. This is much larger than every other decay or transition rate of the atom and therefore dominates the dynamics of the excitation process. For nearby states the l-changing scattering cross section is of the same order of magnitude so that excited atoms are frequently scattered between highly excited states due to the electron bombardment. Note that this inelastic scattering cross section is two orders of magnitude larger than the electron impact ionization cross section for the Rydberg state, which can be estimated to be around σ ion = 4 × 10 −15 cm 2 [28,19]. Once an atom is excited into a Rydberg state, it is several hundred times scattered in neighboring Rydberg states, before it is eventually ionized. As the electron impact ionization γ EB is by itself larger than the internal decay rates of the Rydberg state into low-lying states, we conclude that every atom which is excited to a Rydberg state is also ionized. For the calculation of the ion rate per atom in dependence on the detuning we use the stationary solution of the optical Bloch equation with a Rabi frequency of Ω = 2π × 90 kHz. We set the linewidth equal to γ lc as this is the dominating line broadening mechanism. The ion rate per atom for illumination with the UV beam alone is also given by the optical Bloch equation with a line width of γ w ∆ ( fig. 4). The value of γ opt = 50(10) kHz is retrieved from a fit to the experimental data and summarizes all processes which transform the Rydberg excitation into ions.
In fig. 6 we summarize our findings. Indeed, the difference of four orders of magnitude between the two ionization processes with and without electron beam is well reflected by the theoretical treatment. It shows that electron impact scattering can be a powerful tool to enhance the excitation into Rydberg states. This can be used to create dissipative hot spots in Rydberg gases. In addition, it can be also employed to enhance the detection efficiency of scanning electron microscopy of ultracold quantum gases. Figure 6.
The ion rate per atom Γ lc (red dots) and Γ U V (blue dots) for different detunings in a logarithmic representation.
The red (blue) line is the theoretical prediction of Eq. 4 (Eq. 5)
Outlook
We have reported on the first experimental realization of electron microscopy of a Rydbergexcited atomic cloud. The experimental techniques employed allow us to create ultracold quantum gases under the emission tip of an electron microscope with cycle times of down to 4 s, making our approach suitable for high statistical measurements. The electron microscope is a powerful tool for the manipulation and detection of atomic samples. Through the depletion of neighboring sites in an optical lattice it is possible to prepare very small samples to investigate isolated superatoms [29,9]. It is also possible to make in situ and in vivo measurements of an ultra cold sample. Our setup is equipped with a high-power UV laser source which allows us to drive single-photon transitions to Rydberg states. We have demonstrated that performing electron microscopy on samples coupled to Rydberg states greatly enhances the detection probability with respect to the non-excited case. Such an enhancement is even more pronounced than what is expected from the simple increase of the electron impact cross section of Rydberg atoms. We demonstrate that the giant enhancement is explained by l-changing collisions which take place when the atoms are bombarded by electrons and UV photons. For a pulsed electron beam, an extension to directly imaging the distribution of Rydberg atoms in an atomic gas is an intriguing vision.
We thank the DFG for the financial support within the SFB/ TRR 49. V.G and G.B. were supported by Marie Curie Intra-European Fellowships. | 5,667.2 | 2014-03-07T00:00:00.000 | [
"Physics"
] |
Simulating dark-field X-ray microscopy images with wavefront propagation techniques
The simulation of a dark-field X-ray microscopy experiment using wavefront propagation techniques and numerical integration of the Takagi–Taupin equations is shown. The approach is validated by comparing with measurements of a near-perfect diamond crystal containing a single stacking-fault defect.
Introduction
Dark-field X-ray microscopy (Simons et al., 2015) (DFXM) is a full-field X-ray imaging technique similar to X-ray topography (Berg, 1931;Lang, 1957). However, unlike the latter, DFXM utilizes an X-ray objective lens placed between the sample and the detector to create a magnified image and can therefore achieve a spatial resolution better than the detector pixel size. The spatial bandwidth is limited by the small numerical aperture (NA ' 10 À3 ) of this objective lens. Compared with classical X-ray topography, this provides angular resolution of the scattered beam direction (Poulsen et al., 2017), which makes it possible to quantitatively measure strains and rotations of the crystal lattice by sequentially collecting images while rotating the sample and moving the objective lens.
Traditionally, the quantitative analysis of DFXM data and the theoretical description (Poulsen et al., 2017 of the method have relied on strong approximations. Most important is the kinematical approximation, which is to omit multiple scattering effects, and which holds for small and for highly deformed crystals. Another approximation, which is built into the geometric optics treatment of Bragg scattering, is that infinitesimal sub-volumes in the sample scatter according to the Bragg law for a perfect infinite crystal, and that the intensities scattered from such sub-volumes add together incoherently. In some cases (e.g. for near-perfect crystallites and small defect structures), however, such approximations are not valid.
Here we present a method for simulating DFXM images based on wavefront propagation combined with a framework that treats multiple scattering events (known as the dynamical theory of X-ray diffraction) (Authier, 2001). This method is able to handle the effects of coherence, dynamical scattering, aberrations of the objective lens and detector imperfections, and as such it is a more realistic model of DFXM for nearperfect crystals than the geometric optics model. This will be useful for understanding the type of contrast observed in DFXM images and can aid in experimental planning and data analysis.
Furthermore, a number of advanced approaches to DFXM have been suggested in the literature and tested by various authors [e.g. magnified topo-tomography , confocal Bragg microscopy and tele-ptychography (Pedersen et al., 2020), Fourier ptychographic DFXM (Carlsen et al., 2022)]. To date, the theoretical models used in these examples rely on idealized models of the underlying physics. Again, a more realistic forward model would be useful for testing the limits of their applicability.
In this paper we discuss the steps that are required for such a simulation and present an implementation and some initial results. We compare simulated and experimental data for a near-perfect single-crystal diamond containing only one single stacking fault in the imaged field of view (FOV).
Method
Recent developments in DFXM are moving towards studying the dynamics of isolated defects, such as dislocations, domain walls and acoustic waves in near-perfect single crystals Holstad et al., 2021). Here, dynamical effects cannot be ignored. In these cases, the analysis of DFXM data has often relied on the weak beam approximation, where it is assumed that even highly perfect crystals scatter approximately kinematically when the sample is rotated to the tails of the rocking curve. At this position, the bulk of the crystal does not scatter strongly. Instead, only small strained volumes near defects and surfaces contribute to the scattered intensity.
Much has been published on the subject of simulating X-ray topography images (Taupin, 1964;Authier, 1968;Epelboin & Soyer, 1985). These methods are applicable in our case, but a few extra precautions must be taken due to our need for high quantitative accuracy, and the added complication caused by the objective lens. There exist useful solutions (Chubar et al., 2013;Pedersen et al., 2018;Celestre et al., 2020) for simulating synchrotron sources and X-ray optical components of the kinds applied in a DFXM instrument with coherent wavefront methods. A full simulation based on coherent wavefronts is therefore within reach by combining these established methods.
In dark-field transmission electron microscopy (DFTEM), simulating images with dynamical diffraction effects is done routinely. Although superficially similar to DFXM, the methods cannot be transferred from one technique to the other. The most common methods in DFTEM are either Bloch-wave methods that rely on the column approximation which cannot be applied to DFXM where the scattering angles often exceed 10 , or multi-slice methods that require the atomic structure of the sample to be well sampled, which is not feasible for DFXM where sample sizes often exceed 100 mm (Kirkland, 1998).
X-ray topography methods, on the other hand, rely on the two-beam approximation (Taupin, 1964), which cannot be applied to electron microscopy where potentially hundreds of reflections contribute significantly to the images.
Here we apply the formalism of the Takagi-Taupin equations, which make use of the two-beam approximation to simplify the scattering problem. This allows us to treat Bragg scattering in an averaged way, removing the requirement to over-sample the unit cell. Instead, the sampling is limited by the divergence of the incident X-rays and the size of features in the sample. Propagation from the sample to the detector is handled by established paraxial Fourier optics methods.
We split the simulation flow into five steps as follows: (i) Beam: calculate the amplitudes of the modulated waves of the beam incident on the sample.
(ii) Sample: calculate the complex scattering function throughout the sample crystal.
(iii) Integration: numerically integrate the Takagi-Taupin equations to get the complex amplitudes of the scattered beam on the exit surface of the sample.
(iv) Propagation: propagate the scattered complex amplitudes through the imaging optics to the detector.
(v) Detector: interpolate the propagated field to the detector pixels and account for detector characteristics.
A schematic drawing of the steps and flow of the simulation is shown in Fig. 1.
Defining the beam
We work in a paraxial wave-optics formalism and use a coherent mode decomposition to describe the state of the X-ray beam at a given plane. This is given by a number of modes, labeled with p: E (p) (x, y; z), where E is the amplitude function of a modulated plane wave: E ¼ <½EðrÞ expðik 0 Á r À i!tÞp p, where r = (x, y, z) is the spatial position, k 0 is the modes' wavevector, h -! is the photon energy andp p is the polarization vector.
If a mode is known on a plane z = 0, then the mode's amplitude on another plane can be found by applying a (linear) coherent wavefront propagator: E ðpÞ ðx; y; zÞ ¼ P 0!z ½E ðpÞ ðx; y; 0Þ: We assume that the radiation is beam-like, i.e. that it has a limited extent in two dimensions in real space and in all dimensions in reciprocal space. The reciprocal-space distribution is around a central wavevector k 0 , which defines the nominal direction and wavelength of the incident beam.
Defining the sample
In the Takagi-Taupin (Takagi, 1962(Takagi, , 1969Taupin, 1964) approach to X-ray scattering in the two-beam approximation, the only quantities of interest are the average electric susceptibility of the crystal, 0 , and the two scattering constants h and h that describe the cross section for scattering and back-scattering, respectively. These may be spatially modified by a displacement field u(r) of the strained crystal: Here Q is the reciprocal-lattice vector of the given reflection in an undeformed reference lattice. This dependence on the displacement field explains the high sensitivity to small strains. h and h can often be considered constants, but in samples with twin boundaries there may be discontinuous jumps in the values of this function, which explains contrast observed at presumably strain-free inversion twin domain boundaries in polar materials (Klapper, 1987).
In this paper we focus on slab-shaped single crystals, i.e. crystals with two parallel surfaces and infinite size in the orthogonal directions (see Fig. 2). The normal of the exit surface is calledn n.
We utilize a discrete representation of the sample structure on an orthogonal grid defined by the three directionsx x,ŷ y and z z ¼n n, and corresponding step sizes d x , d y and d z . It is necessary that the surface normal of the crystal slab is parallel to thê z z axis, but we make no requirement on the last free rotation. The number of grid points in each direction will be labeled N x , N y and N z , giving a total size of the simulated domain of L x = d x (N x À 1), L y = d y (N y À 1), L = d z (N z À 1). L is the thickness of the simulated crystal slab.
The complex value of these scattering functions needs to be known with high resolution. For simple test cases, where the displacement field is given by an analytical expression, this is not a point of concern. If, however, the displacement field is generated by a numerical simulation, then it needs to be computed with sufficient resolution to at least match the resolution of the experiment ('50 nm) throughout the volume of the sample. The size of the sample can be up to several hundred micrometres.
For highly deformed samples, the scattering function contains a phase factor of the shape exp(iQ Á u). In order to limit the phase variation between adjacent voxels to less than 2, step sizes must be below |ru|/(2|Q|), which means small steps must be used for highly deformed samples. In samples with nanometre-sized domains or other small structures, all structural features must be resolved.
Integrating the Takagi-Taupin equations
Scattering of X-rays by a deformed crystal is treated in the formalism of the Takagi-Taupin equations (TTEs). In formulating the TTEs, there is an arbitrary choice of the vector k 0 , which leads to slightly different versions of the final equations. The ones chosen here are referred to as the symmetrical TTEs (Vartanyants & Robinson, 2001) that arise when k 0 is taken to be the vacuum wavevector of the incident beam as opposed to the more conventional choice of the refracted wavevector used in the original publications: where ¼ 2 sinð2Þ is a measure of the misorientation away from the vacuum Bragg condition, is the rocking angle and C is the polarization factor. E ðpÞ 0 are the modes of the plane-wave decomposition of the incident beam and E ðpÞ p are the corresponding modes of the scattered beam, which is given relative to the wavevector k h = k 0 + Q.
The TTEs are typically solved using a finite-difference integration scheme. A number of different algorithms have been used in the literature with the pioneering work done by Taupin (1964), Authier (1968), Epelboin & Soyer (1985). In this paper we use a novel, more flexible scheme which is documented by . In contrast to other established methods, this method is able to operate on an orthogonal grid representation of the sample structure, which is achieved by implicitly utilizing Fourier interpolation. This, however, causes a less efficient use of computer resources.
The geometry of the problem is set by the shape of the incident beam, the vectors k 0 and Q, as well as the choice of a computational grid. The plane spanned by the two vectors k 0 and Q is called the scattering plane.
When the scattering plane is normal to theŷ y axis, the TTEs decouple into a set of 2D problems that can be solved slice-byslice. If we are free to choose the orientation of the computational grid, we can always choose a geometry where this becomes the case.
When Q is parallel to the surface of the crystal, we say that we have a symmetric Laue geometry. In this case we can choose an orientation of the computational grid where Qjjx x, where the TTEs take a particularly simple form.
In order to solve the TTEs, we impose zero Dirichlet boundary conditions in the two transverse dimensions, x and y. These boundary conditions require the sample grid to be large enough to fit the entire Borrmann triangle extending from every point where the incident beam amplitude is non-zero. This is fulfilled if the non-zero part of the amplitude is fully contained in the rectangle defined by (see Fig. 2 where L x , L y are the lengths of the simulated domain in the x and y directions, and L is the thickness of the crystal. For a given incident beam and Q, this sets a minimum on the size of the simulated domain in the two transverse directions. With the sample structure and the boundary conditions given, the TTEs constitute an initial value problem, where the initial value is the amplitude of the incident X-rays on the z = 0 surface. This can be solved by an appropriate finite-difference scheme to yield the amplitudes of both the transmitted and scattered beams on the exit surface z = L.
Propagating through the optics
In DFXM, the scattered beam at the exit surface of the sample is imaged onto a detector using an objective lens in a magnifying geometry. The propagation through the lens is challenging to simulate due to the inherent thick-lens behavior of the compound refractive lens (CRL) which is typically used as the objective lens. For this, we use a computational approach where each lens in the CRL is treated as a thin lens and a paraxial FFT (fast Fourier transform) propagator is used to propagate the wavefront between each lens.
To do this, we make use of a method for propagating wavefronts with rapidly varying quadratic phases that come from the transmission function of the lenses that treats the quadratic component analytically (Ozaktas et al., 1996;Chubar & Celestre, 2019). To simulate an aberrated CRL, a separate aberration function is included at each lenslet in the CRL.
These methods have previously been used to model thick CRLs like the one used in this study (Pedersen et al., 2018;Celestre et al., 2020).
It is useful to introduce a new optical axis (ẑ z i ) aligned with the average wavevector of the scattered beam, k h . To bridge the gap between these two coordinate systems, we project the values from the exit surface of the crystal to the z = 0 plane of the new coordinate system.
The inherent near-field nature of the imaging geometry, which is determined by the fact that the FOV is as large as the aperture of the objective lens, is handled by first multiplying the field with the near-field phase factors: The need to over-sample this function can effectively limit the size of the FOV that is possible to simulate for a given pixel size.
Detector characteristics
With the mode amplitudes on the detector plane given, we now need to interpolate these values to the detector pixels and incoherently sum over the modes of the incident beam.
If the purpose of the simulation is to estimate the resolution or to create a data set to be used to test the data analysis procedures, one should remember to include non-ideal behavior introduced at the detector. The most important effects are the incoherent point spread of the detector, background signal, counting noise and non-linear response.
The detector used here is an indirect detector composed of a scintillator crystal (25 mm-thick gadolinium gallium garnet), optical microscope (Mitutoyu M Plan Apo 10Â, NA = 0.28 and tube lens) and pco.edge 5.5 sCMOS camera with pixel size 6.5 mm. Contributions to the incoherent point spread function arise from the diffraction limit due to the finite NA of the optical microscope, the finite thickness of the scintillator (especially when it is larger than the depth-of-focus of the optical microscope), scattering within the scintillator and the optical microscope, and aberrations.
Counting statistics/readout noise can be the critical factor when imaging small grains or weak reflections. The non-linear response (saturation) might be quite important for perfect crystals, as the images have interesting features over a very large dynamic range.
Comparison with experiment
To test our approach, we simulate a section-topography-type experiment with a near-perfect single-crystal diamond slab of thickness 300 mm containing a single stacking-fault defect. The sample is imaged in a symmetric Laue geometry in a {111} reflection with [110] entrance and exit surfaces.
The investigated defect is a stacking fault, which arises by the addition or removal of a single close-packed plane of atoms in the face-centered cubic (f.c.c.) parent lattice of the diamond. The fault vector is of the family b s:f: ¼ ð1=3Þh111i which is not a translational symmetry of the f.c.c. lattice. These planar defects are bounded by the surfaces of the crystal and Frank-type partial dislocations (Frank, 1951;Kowalski et al., 1989). In the described experiment, the edges of this defect lie outside the Borrmann triangle and the effects of the strain originating at the edges can be ignored. This allows us to treat the stacking fault as an infinite planar defect [on one side u(r) = 0, on the other u(r) = b s.f. ]. In the Takagi-Taupin description, the stacking fault thus becomes a discrete jump in the phase of the scattering function of magnitude 2[hk'] Â b s.f. = AE2/3 when imaged using a [111] reflection not orthogonal to the stacking-fault normal (Klapper, 1987). A sketch of the sample geometry is shown in Fig. 3.
This constitutes a good test sample, as the defect (aside from the 12 possible orientations of the defect) only has a single continuous degree of freedom: the position of the stacking fault along the normal. Furthermore, diamond is one of a few materials where macroscopic crystals with very low defect density are available. Provided our method captures all the relevant physics, we should therefore be able to perfectly recreate the experimental data.
The dynamical scattering patterns produced by isolated stacking faults in diamonds have previously been studied in detail by classical X-ray topography (Kowalski et al., 1989).
Experiments were carried out at the ESRF dark-field X-ray microscopy beamline, ID06-HXM (Kutsal et al., 2019), after the EBS (Extremely Brilliant Source) upgrade. A Si (111) Bragg-Bragg double-crystal monochromator selected X-rays with photon energy 17 keV from the undulator source. The spot size of the beam on the condenser lens is limited by a slit in the vertical direction to 0.2 mm. Based on parameters of the X-ray source given by the ESRF, we estimate a vertical coherence length of 340 mm at the distance of 50 m from the source to the condenser lens, which is significantly larger than the aperture of the condenser lens. The horizontal coherence length is estimated to be 40 mm, which is smaller than the FOV but much larger than the extent of the coherent point spread function of the objective lens. The objective lens consists of 70 individual biconcave Be lenses of apex curvature 50 mm. The CRL is further modified with a 100 mm square aperture after the last lens To describe the incident X-rays, we use only one coherent mode for each wavelength and 41 different wavelengths covering a relative energy spread of 6 Â 10 À4 in total. For each energy component, the amplitude at the sample position is the Fourier transform of the complex transmission function of an aberration-free 1D condenser lens with a Gaussian-shaped absorbing aperture and a hard cut-off at 140 mrad. Consequently, the angular spectrum in the transverse direction is a top-hat profile with a small Gaussian smoothing at the edges, whereas the spatial profile is a diffraction-limited focal spot with some ringing due to the hard cut-off (see Fig. 4). We omit the use of several transverse mutually incoherent modes. The effects of including these would be small, since the effective source size is small compared with the NA of the objective lens.
In the experimental realization we make the observations in a laboratory frame defined byx x lab ,ŷ y lab andẑ z lab , whereẑ z is parallel to k 0 . In this experiment the scattering is vertical in the laboratory frame, meaning thatŷ y lab is normal to the scattering plane. We choose the integration grid such thatŷ y ¼ŷ y lab .
The simulation used a grid of 2048 Â 2048 Â 3001 points with step sizes of 60, 60 and 100 nm, respectively. This matches the 300 mm thickness of the sample and the '100 mm FOV in the y direction. The large size in the x direction was necessary to satisfy the constraints of equation (5). Step sizes in x and y directions are chosen to be larger than the resolution of the final images, and the step size in z is chosen such that the integration error of the finite-difference scheme is lower than the noise level of the final images. The execution time is dominated by the integration of the TTEs, which took 3.5 h per energy point running on a single core. The simulation was parallelized over the modes.
The computation time was dominated by the integration of the TTEs, which involved '10 000 1D Fourier transforms of the 2048 Â 2048 arrays used in this example compared with just '150 2D Fourier transforms for the simulation of the optics. The use of computer resources scales linearly with the number of modes, so if a transversely incoherent model was to be used, the needed resources would increase by a factor of the number of modes. In the study presented here computa- Vertical profile in (a) reciprocal and (b) real space of the beam used in the simulation of this paper. tion time was not an issue, but if high-throughput simulation was needed, large performance gains could likely be achieved by applying a more efficient algorithm for the integration of the TTEs (Epelboin & Soyer, 1985). The integration of the TTEs can also be parallelized into separate 2D problems for each vertical slice of the sample and should easily scale with available computer resources. If the thick-lens effects are not critically important, the CRL may well be approximated by a single convolution step (Pedersen et al., 2018). Furthermore, the large number of energy modes used in this example was only necessary to ensure proper sampling of the highfrequency features used as lens aberration and avoid aliasing problems in Fig. 9. If these are not important, a much smaller number of energy points would be sufficient.
Near-field measurements
The microscope used for the experiment provides the possibility to additionally carry out traditional X-ray topography by placing a detector closely behind the sample (40 mm in the examples shown here) without using the objective lens. This is useful for alignment and for lowresolution characterization of the sample.
For a single coherent mode [ Fig. 5(b)] this propagation distance causes recognizable Fresnel-diffraction fringes around sharp features in the scattered field. In a polychromatic simulation [Fig. 5(c)], these fringes are blurred out. The difference in wavelength (which causes a small difference in the free-space propagator) is not sufficient to explain this blurring. Rather, the broadening is caused by the slight difference in scattering angle of the different energy components in the incident beam [ Fig. 5(d)]. The features in the simulated image, however, are not as wide as in the measured data [ Fig. 5(a)]. This is likely explained by the incoherent point spread of the detector.
The vertical divergence of the condensed line beam is large compared with the intrinsic width of the dynamical rocking curve of the diamond sample. The crystal therefore acts as an analyzer when rocked in the condensed line beam (see Fig. 6). The width of the rocking curve is largely determined by the divergence of the incident beam, but the finite-energy bandwidth blurs the sharp cut-off caused by the condenser lens' physical aperture. The asymmetry of the measured rocking curve [ Fig. 6(c)] reveals a misalignment of the condenser lens.
High-frequency defects in the condenser lens are visible in the spatially resolved rocking curve of perfect parts of the crystal [see Fig. 6(b)] as vertical stripes. The stripes are blurred along the rocking-angle direction due to the finite bandwidth of the incident X-rays -an effect which is confirmed by the simulations shown in Fig. 6(a).
When the crystal is rocked, a different part of the spectrum of the incident beam will be in the Bragg condition and therefore the sample will scatter in slightly different directions as a function of the rocking angle [sketched in Fig. 6(d)]. Due to the finite propagation distance from the sample to the detector, this change in angle translates to a change in position of the measured image. This mix of position and angular information, which is avoided by the use of an objective lens, is unavoidable in X-ray topography methods due to the finite propagation length, but it is exaggerated in this study due to the relatively large propagation distance and large vertical divergence compared with more usual topography experiments.
DFXM images
To simulate the DFXM images, we use a model of an ideal CRL. The physical aperture of the CRL is defined by a 0.1 Â 0.1 mm square absorbing slit placed at the exit of the lens. The only fitted parameters for the whole simulation are the relative positions of the sample, lens and detector as well as the noise level of the detector. The intensity of the simulated images is scaled to match the intensity of the measured images. The same scaling parameter is used for all images. The experimental images are overexposed (saturated) at the direct image of the stacking fault; therefore we here choose a color map that clips the highest intensities in the simulated images.
As can be seen in Fig. 7, the DFXM simulation qualitatively recreates the features of the experimental images. However, there are a number of deviations: (i) We underestimate the magnification of the imaging setup by about 5%, which leads to an incorrect scaling of the images. This is most likely due to a small deviation of the apex radius of curvature from the nominal value of 50 mm in the individual lenses of the CRL used as objective lens.
(ii) The simulated images contain a smaller number of Borrmann fringes (the horizontal stripe features) than the measured data. We attribute this to the known high sensitivity of the spacing of Borrmann fringes to small macroscopic strain gradients (Rodriguez-Fernandez et al., 2021). Alterntively, a slight miscalibration of the photon energy or sample thickness could also lead to this difference.
(iii) The simulated images have a regular pattern of vertical streaks close to the right-hand side of the images. These are due to Fresnel diffraction from the hard edge of the square aperture in the objective lens. This is likely an artifact of the assumption of perfect transverse coherence in the horizontal direction or of the perfectly sharp edges of the aperture that are somewhat jagged in practice.
(iv) The measured images contain noise with the appearance of vertical streaks and speckle-like features close to the brightest features. This can be explained either by the aberrations in the condenser lens or in the objective lens, as will be discussed later.
In Figs. 7(c), 7(d) we simulate an image where the objective lens is displaced from the center of the scattered beam such that rays that are specularly reflected fall outside the aperture of the objective lens in the bottom part of the displayed ROI (region of interest). In that region, only diffusely scattered X-rays will contribute to the image. This results in the disappearance of the dynamical features, while the direct image of the stacking fault can still be seen. In visible light microscopy, this is referred to as 'dark-field contrast'.
In the geometric optics treatment, weak beam contrast is explained by the presence of small regions where the lattice is strained and rotated away from the average lattice. In these regions, rays are scattered if they satisfy the exact Bragg condition for the deformed lattice. A stacking fault is in principle a perfectly sharp defect with no spatial extent so no such region exists. The appearance of weak beam contrast therefore illustrates the inability of the geometric model to handle diffraction effects that are important when describing scattering from small structures with a characteristic size on the order of the wavelength or smaller. Since the stacking fault is thought to be a perfectly sharp defect, the width of the image of this defect can serve as a rough estimate of the resolution of the instrument. The stacking fault is a 2D feature, and therefore the width of the image is not only determined by the resolution of the imaging optics, but also by the projection of the part of the stacking fault illuminated by the sheet beam along the scattered beam direction.
In Fig. 8 (inset) we compare the width of this feature in the experimental images with that in the simulated images. We see that the polychromaticity does not contribute significantly to the width of the feature in the simulations. Previous studies of the chromatic aberrations in CRLs, using the same computational approach as we apply here, also find that the chromatic aberration only adds a small part to the point spread of the imaging optics (Pedersen et al., 2018).
The experimental image has an FWHM of 1.3 mm, about 0.5 mm wider in the demagnified sample plane coordinates than that predicted by the simulations. We believe that the resolution of the experiment is degraded by aberrations in the CRL lenses.
Aberrated lenses
So far, we have ignored the effect of the aberrations in the lenses. In transmission images (Lyatun et al., 2020) and Bragg images taken without the condenser lens, short-wavelength aberrations are known to cause strong speckle-like noise in the final images. The apparent absence of this noise in DFXM images is surprising at first. However, as previously observed, this noise is averaged out when the imaged field is only partially coherent (Falch et al., 2019;Carlsen et al., 2022). Normally we think of the dynamically scattered X-rays as highly coherent as the Bragg scattering effectively collimates the incident radiation, but this argument does not consider the polychromaticity of the incident radiation.
While CRLs have been shown to be nearly achromatic over the bandwidth of the monochromated beam (Pedersen et al., 2018), Bragg scattering is not: a higher-energy component of the incident beam scatters at a smaller angle and vice versa, as sketched in Fig. 5(d). Since the incident beam has a large vertical divergence (compared with both the energy bandwidth and the Darwin width) set by the aperture of the condenser lens, the integration over energies corresponds to integrating over a small spread of angles of the scattered beam. This integration averages out the high-frequency parts of the aberrations in the vertical direction, leaving features elongated in the vertical direction. A relative energy spread of 1.0  10 À4 corresponds to an angular difference of 1.0  10 À4 rad  tan which gives 8 mm at the lens planecomparable with the average grain size (15 mm) of the O30Hgrade beryllium (Lyatun et al., 2020)
used in our CRL.
This effect is demonstrated in Figs. 9(a), 9(b), where an aberrated lens is constructed by multiplying the wavefront by an aberration function at the position of the first lens and the last lens in the CRL. The aberration functions used here are pure phase objects and are made by randomly placing a number of circles of random size. The amplitude and number of circles are chosen to make the simulated and measured images similar. The partially averaged speckle noise has the appearance of vertical stripes and is qualitatively similar to that observed in the real images [ Fig. 7(a)]. In a typical experiment, we do not acquire sufficient data to uniquely determine the aberrations, but an effective aberration function can be recovered using Fourier ptychography (Carlsen et al., 2022). The qualitative similarity between simulated and measured images confirms that the vertical stripe artifacts seen in DFXM images of highly perfect crystals can be explained by high-frequency errors in the objective lens, which we know to be present.
In Figs. 9(c), 9(d) we investigate the effect of similar aberrations, to those used in the objective lens, in the condenser lens. Once again, averaging over the energy bandwidth significantly reduces the strength of the noise and results in vertical stripes. In this case the stripes are unbroken and can be followed from the top to the bottom of the image, in contrast to the noise observed in the real images and with an aberrated objective lens.
Conclusion
DFXM is based on well known physics and we can predict the images it will produce -if we know the structure of the sample. It is possible to simulate the full FOV of the prototypical DFXM instrument at ID06-HXM at the ESRF.
Comparison of our simulations and experimental findings from a near-perfect single-crystal diamond suggests that most deviations between our simulations and the observed images can be explained by non-ideal behavior of the lenses. This suggests that the performance of DFXM instruments is critically limited by the quality of the objective lens.
In general, we do not have a sufficiently accurate model of the sample structure to do full simulations of the DFXM experiment, and the data collected in a typical experiment are not enough to build a full 3D model of the sample at sufficiently high resolution. Nevertheless, simulations like the ones shown here should prove useful for evaluating possible upgrades of the instrumentation and to qualitatively study the type of contrast observed from different types of defects in near-perfect single crystals, such as isolated dislocations, twin boundaries and point defects.
More deformed crystals are difficult to simulate, both because models of the displacement field in such crystals are not easily obtainable and because the large strain would require impractical, small step sizes to ensure proper sampling of the scattering function. In these samples, dynamical diffraction effects are not thought to be important and the speckle-like noise due to lens aberrations should also be less strong, as the scattering is more diffuse. So a wavefront-based simulation approach is less appropriate in this type of sample. It may however be interesting to investigate the transition from the dynamical patterns from near-perfect crystallites to kinematical scattering from deformed crystals using our new approach. | 8,037.4 | 2022-01-19T00:00:00.000 | [
"Physics"
] |
How Easy is SAT-Based Analysis of a Feature Model?
With feature-model analyses, stakeholders can improve their understanding of complex configuration spaces. Computationally, these analyses are typically reduced to solving satisfiability problems. While this has been found to perform reasonably well on many models, estimating the efficiency of a given analysis on a given model is still difficult. We argue that such estimates are necessary due to the heterogeneity of feature models. We discuss inherently influential factors and suggest potential algorithmic solutions.
INTRODUCTION
Feature models [7,10,21] describe the user-visible characteristics, known as features, of software product lines (SPLs) [6,45].A configuration of features is valid when it fulfills all feature dependencies, known as constraints [6].The valid configurations of a feature model usually form large configuration spaces [21,50], which quickly become difficult to comprehend.Thus, automated feature-model analyses have been proposed [2,8,9,36,46,60], with which stakeholders can improve their understanding about a feature model (e.g., to spot modeling errors or guide business decisions).Furthermore, feature-model analyses enable more advanced SPL analyses, which support several activities in the software development life cycle (e.g., design [8,52], implementation [32,56], testing [29,35], and economical estimates [15]).
To implement such analyses, feature models are often represented as propositional formulas [6,7,39,47], which are then passed to off-the-shelf analysis tools, such as satisfiability (SAT) solvers [36,38].This approach is usually tractable in practice, although SAT is NP-complete.In two well-known publications, this phenomenon has been empirically investigated on large collections of feature models; concluding that "SAT-based analysis of (large real-world) feature models is easy" [31,36].
While many studies and experiences confirm this overall sentiment, feature models are also known to be heterogeneous (e.g., in terms of origin, domain, and size) [5,50].Indeed, there are several large feature models (e.g., the Linux kernel, Freetz-NG, or Automotive02) that still challenge state-of-the-art analysis techniques [28,42,44,50,51,55].This is because not all analyses are equally tractable: For example, while a single call to a SAT solver is usually cheap to compute, some analyses are difficult or impossible to phrase in terms of a single SAT call [51].Instead, they either require several SAT calls (e.g., reasoning about edits [57], core/dead features [12,20], type-checking [22,23]), specialized solvers (e.g., #SAT [50] or AllSAT [18]), or algebraic reasoning (e.g., slicing [1,26] or differencing [2]).Thus, a more conservative interpretation of previous results might be: "many SAT-based analyses on most (large real-world) feature models are comparably easy".
However, this naturally begs several questions: Which analyses are easy on which feature models?What does easy mean for feature-model analysis?What factors influence the answers to these questions?We argue that it is time to pivot from a class-based point of view on feature-model complexity to an instance-based perspective.That is, instead of making sweeping statements about the entire class of feature models, it may be more illuminating to try to estimate the difficulty of computing a given analysis for a given feature model (potentially taking into account other influential factors).With this shift in perspective, we aim to foster a more nuanced discussion of feature-model complexity in the SPL community, which takes the heterogeneity of feature models into account.Both perspectives are cases of feature-model meta-analysis, which provides a general framework for talking about properties of feature-model analyses.In the following, we outline our idea of meta-analysis, the shift in perspective we propose, and initial suggestions for working towards instance-based meta-analysis.
FEATURE-MODEL META-ANALYSIS
We define feature-model meta-analysis as the practice of asking (and answering) questions about feature-model analyses as follows: • First, one must ask a question about a (non-)functional property of feature-model analysis.The actual analysis results are not of interest here, instead one asks for correctness or efficiency (e.g., regarding runtime, memory usage, or energy consumption).The question fixes some factors (e.g., feature model and analysis), while leaving others blank (e.g., algorithm and solver).• Second, one must define criteria to answer the question (either exactly or an estimate) and propose an algorithm to do so.
Here, we discuss two opposed kinds of meta-analysis: classand instance-based meta-analyses.While this distinction is not clear-cut, it serves well for demonstrating the shift in perspective we propose.
Class-Based Meta-Analysis
Class-based meta-analyses ask questions about a whole class of feature models and/or analyses.Thus, they can illuminate the feasibility of computing certain analyses on certain models.In previous work, several questions of this kind have been asked (and answered).
"Is SAT-based analysis of feature models easy?"This is an open meta-analysis question that binds few factors and can only be answered with "yes" or "no".Mendonça et al. [36] actually pose and answer a more specific variant of this question: They focus on artificial feature models and, although acknowledging repeated SAT calls, they perform only one SAT call.They conclude that analysis is indeed "easy" because they find no phase transition."Is SAT-based analysis of large real-world feature models easy?" Analogously, Liang et al. study the feasibility of singular SAT calls on feature models of several open-source SPLs, which are specified using the KConfig language [31].They give new insights as to why determining feature-model satisfiability is comparably easy-still, they do not consider complex feature-model analyses or distinguish difficulty on a per-instance basis.
Instance-Based Meta-Analysis
The above meta-analyses have likely been helpful in establishing the widespread use of SAT solvers for feature-model analyses.However, they neither acknowledge the vast gap between feature models that are computationally "simple" (e.g., the graph product line [33]) or "complex" (e.g., the Linux kernel [42,55]), nor do they distinguish how this computational complexity may depend on the computed analysis (or other, more subtle factors).We discuss briefly how to both ask and answer instance-based meta-analysis questions.
Asking Meta-Analysis Questions To acknowledge the differences in complexity between feature models, we can ask more precise questions about analysis tasks, such as: "How much time does analysis X need on feature model Y when using solver Z?" or "Which algorithm is most memory-efficient for computing X on Y?" These questions still leave room for filling in details (e.g., system specifications), so they can only be estimated-but they will yield more useful answers for a given use case than the more sweeping statements obtained with class-based meta-analyses.The appropriate level of parametrization depends on the use case and represents a trade-off: Binding more factors allows for more accurate estimates (improving internal validity); binding less factors allows for more general settings (improving external validity) [49].As a starting point for posing interesting questions, we list several factors that we know or suspect to influence the correctness or efficiency of feature-model analyses: • Feature Model [50]: origin [5], domain, size, and expressiveness of constraints [24,54] • Propositional Encoding [6,7,47]: extractor (e.g., for KConfig specifications [16,17,42]), non-Boolean variability [11,40,43], CNF transformation [28,34], and preprocessing • Analysis: class (consistency, cardinality, enumeration, or algebraic) [51], the question it answers [8,52], the chosen algorithm [12,20], and its implementation • Solver (if needed): class (e.g., SAT [36], #SAT [50], AllSAT [18], or VSAT [59]), solver parametrization (e.g., exact or approximate, optional preprocessing steps), name/version • Knowledge Compilation (if needed): class (e.g., BDD [19,37,55] or d-DNNF [53]), name/version • Prior Information (if given): existing analysis results, revisions (incremental analysis [25]), and interfaces [48] • Execution Environment: CPU, RAM, and deep variability [30] It is one purpose of feature-model meta-analysis to study the influence of these (and other) factors and how they interact.To do so, we must find techniques to answer meta-analysis questions.
Answering Meta-Analysis Questions Ideally, we want to answer instance-based meta-analysis questions without actually computing the analysis in question, which can be costly or even infeasible.Instead, one usually tries to investigate surrogate metrics (e.g., on an ordinal or interval scale) to estimate analysis complexity.For example, we can characterize feature models using metrics: • Syntactic Metrics [50]: number of features, variables, constraints, clauses, literals; constraint size, density [31] • Semantic Metrics [3,4]: phase transition [36], community structure [41], self-similarity While syntactic metrics are easy to compute, they seem to allow for rough estimates at most [19,50].Semantic metrics are probably better indicators for inherent complexity of a feature model, but are themselves usually NP-hard and could therefore be approximated.
Once we have determined suitable metrics for studying a metaanalysis question, we must also choose an algorithm to answer it.To this end, previous work uses simple criteria (e.g., "yes/no" for a phase transition [36]) or otherwise handcrafted models and hypotheses (e.g., the number of features correlates with analysis time [50]).Alternatively, machine learning techniques might be applicable, but we are not aware of any studies in this direction.
CONCLUSION
By pivoting from class-to instance-based meta-analysis, many directions for discussions and future work open up: What metaanalysis questions are worth asking, what factors are relevant, and how do they interact?Is feature-model complexity intrinsic, regardless of the chosen analysis or solving technique?When do knowledge compilation and incremental analysis pay off?By improving our ability to answer instance-based meta-analysis questions, we lay a foundation for implementing meta-analyzers that (semi-)automatically choose the best (e.g., fastest) analysis plan (i.e., algorithm, solver, . . . ) for a given analysis task, analogous to what portfolio solvers do for SAT [58].Thus, analysis plans render analyses into first-class objects, which we can precisely describe, manipulate, and optimize; as has been done for databases [14] and, to some degree, also been proposed for SPL analyses [13,27]. | 2,118.2 | 2024-02-07T00:00:00.000 | [
"Computer Science"
] |
Multimodal neuroimaging computing: the workflows, methods, and platforms
The last two decades have witnessed the explosive growth in the development and use of noninvasive neuroimaging technologies that advance the research on human brain under normal and pathological conditions. Multimodal neuroimaging has become a major driver of current neuroimaging research due to the recognition of the clinical benefits of multimodal data, and the better access to hybrid devices. Multimodal neuroimaging computing is very challenging, and requires sophisticated computing to address the variations in spatiotemporal resolution and merge the biophysical/biochemical information. We review the current workflows and methods for multimodal neuroimaging computing, and also demonstrate how to conduct research using the established neuroimaging computing packages and platforms.
Introduction
Neuroimaging has profoundly advanced neuroscience research and clinical care rapidly in the past two decades, prominently by magnetic resonance imaging (MRI), complemented positron emission tomography (PET), and electroencephalography (EEG)/magnetoencephalography (MEG). The art of neuroimaging today is shaped by three concurrent, interlinked technological developments [1]: Data Acquisition The advances of imaging instrumentation have enabled digital image acquisition, as well as electronic data storage and communication systems, such as the picture archiving and communication system (PACS). These imaging systems, CT, MRI and PET showed obvious clinical benefits by providing high contrast tissue differentiation. The previous film-based reading was replaced by the electronic displays (axial, coronal and sagittal planes of the volume) without losing diagnostic quality.
Medical Image Computing The growth of neuroimaging has spurred a parallel development of neuroimaging computing methods and workflows, including bias correction, registration, segmentation, information extraction and visualization. We should note the difference between neuroimaging and neuroimaging computing. Neuroimaging focuses on the image acquisition, capturing the snapshot of the brain; whereas neuroimaging computing focuses on the computational analysis of the brain images, extracting and enhancing the information of relevance to best describe the brain anatomy and function.
Package and Platform Development To fit into research and clinical timelines and facilitate translational medicine, the neuroimaging computing methods and workflows are often integrated into software packages. Many such packages were added to imaging systems by the major vendors of medical imaging equipment and many specialized companies. However, a greater number of neuroimaging computing packages and platforms are free and opensource, designed and supported by the medical imaging research groups and communities.
Multimodal neuroimaging, i.e., the simultaneous imaging measurement (EEG/fMRI [2], PET/CT [3]) or summation of separate measurement (PET and sMRI [4], sMRI and dMRI [5], fMRI and dMRI [6]), has become an emerging research area due to better access to imaging devices, especially the hybrid systems, such as PET/CT [7,8] and PET/MR [9]. The recent advances in neuroimaging computing methods also enabled joint analysis of the multimodal data. The free and open-source software (FOSS) packages and platforms for neuroimaging computing further facilitate the translation of the multimodal neuroimaging research from the lab to better clinical care.
Multimodal neuroimaging computing is a very challenging task due to large inter-modality variations in spatiotemporal resolution, and biophysical/biochemical mechanism. Compared to single imaging modality computing, it requires more sophisticated bias correction, coregistration, segmentation, feature extraction, pattern analysis, and visualization. Various methods for neuroimaging analysis have been proposed, and many have been integrated into the task-oriented packages or integrated platforms.
In this paper, we review the state-of-the-art methods and workflows for both modality-specific neuroimaging computing and multimodal neuroimaging computing, and demonstrate how to conduct multimodal neuroimaging research using the established packages and platforms. Fig. 1 provides an overview of the current status and illustrates the major components of neuroimaging computing, including neuroimaging modalities, modality-specific computing workflows (a series of tasks), multimodal computing methods, algorithms, packages, platforms and communities. MRI, PET, EEG/MEG and their computing workflows and methods are discussed in this review. A neuroimaging computing task in an analysis workflow may be fulfilled by multiple algorithms, and the most widely used algorithms, e.g., voxel-based morphometry (VBM) [49], are often integrated into software packages, e.g., Statistical Parametric Mapping (SPM) 1 , FMRIB Software Library (FSL) 2 , and Neurostat 3 . The new imaging tasks also demand the refinement of existing algorithms and development of new algorithms. Similar algorithms are often developed independently in different labs, sometimes with little awareness of existing packages/platforms. This paper is organized as follows. In Sect. 2, we elaborate the computing workflows, which consist of a number of specific tasks, for individual modalities. In Sect. 3, we review the major multimodal neuroimaging computing methods, i.e., registration, segmentation, feature integration, pattern analysis and visualization. In Sect. 4, we introduce the task-oriented packages and platforms for the tasks mentioned in previous sections. We focus on the free and open source software (FOSS) in this review, since they could help to better realize the quickly evolved methods and workflows than their commercial counterparts, and thus accelerate translational medicine. For the sake of clarity and precision, the algorithms, packages and platforms are not described in detail, but we refer the interested readers to more specific papers instead. In Sect. 5, we give one example of brain tumor surgical planning using the established packages and platforms. Lastly, we outline the future directions of multimodal computing in Sect. 6. Bias and artifacts in neuroimaging signals may result from imaging systems, environment, and body motion. Many biases and artifacts are induced by the imaging systems, e.g. inhomogeneous radio frequency (RF) coils in MRI, contrast agents in PET/CT, broken or saturated sensors in EEG/MEG system. Environment-related artifacts, arising from generators of magnetic fields outside the human body such as magnetic noise from power lines and other environmental noise sources, such as elevators, air conditioners, nearby traffic, mechanical vibrations transmitted to shielded room, bed vibration, and pulsation [51]. Motion-related artifacts are caused by eye movements, head movements, cardiac and muscular activity, and respiratory motion. The motion of magnetic implements, such as pacemakers and implantable defibrillators [52] may also give rise to artifacts, and may cause danger to patients in strong magnetic field, although there are new MRI compatible pacemakers/defibrillators that have been introduced [53].
The bias and artifacts in MRI are mainly system-related, e.g., RF inhomogeneity causing slice and volume intensity inconsistency. The nonparametric nonuniformity normalization (N3) algorithm and its variant based on Insight Toolkit [54, 55] (N4ITK) [56] are the de facto standard in this area. The acquisition protocols for dMRI are inherently complex, which require fast gradient switching in Echo-Planar Imaging (EPI) and longer scanning time. dMRI is prone to many other types of artifacts, such as eddy current, motion artifacts and gradient-wise inconsistencies [57]. Tortoise [58] and FSL diffusion toolbox (FDT) [59] are popular choices for eddy current correction and motion correction in dMRI data, and the recently proposed DTI-Prep [60] offers a thorough solution for all known data quality problems of dMRI. Motion is a serious issue in fMRI, and may lead to voxel displacements in serial fMRI volumes and between slices. Therefore, serial realignment and slice timing correction is required to eliminate the effects of head motion during the scanning session. Linear transformation is usually sufficient for serial alignment, whereas a non-linear auto-regression model is often used for slice timing correction [61]. These two types of correction are commonly performed using SPM and FSL. Dedicated PET scanners have been replaced by the hybrid PET/CT systems [62]. The most commonly seen artifacts on PET/CT are mismatches between CT and PET images caused by body motion due to the long acquisition time of the scan. Metallic implants and contrast agents may also give rise to artifacts on PET/CT, usually leading to overestimation of PET attenuation coefficients and false-positive findings [63]. Knowledge and experience are needed to minimize these artifacts and, in that way, produce betterquality PET/CT images. EEG and MEG signals are often contaminated by all of the three types of artifacts, such as the system-related superconducting quantum interference device (SQUID) jumps, and the noise from the environment or body motion [51]. Visual checks and manual removal are usually required to exclude the artifacts. Another strategy uses signal-processing methods to reduce artifacts while preserving the signal. Linear transformation, e.g., principal component analysis (PCA) and independent component analysis (ICA) [64,65], and regression, e.g., signal space projection (SSP) and signal space separation (SSS) [66,67], are frequently applied to the raw EEG/ MEG data.
Structural MRI computing
The sMRI computing workflows usually involve skull striping, tissue and region of interest (ROI) segmentation, surface reconstruction [68], and can include brain morphometry analysis, such as the voxel-based morphometry (VBM)/tensor-based morphometry (TBM)/deformationbased morphometry (DBM) [49], and surface-based morphometry (SBM) [69] by comparing one group of subjects to another or tracking the changes over a sequence of observations for the same subject. FreeSurfer [70] is a wellestablished tool for brain tissue segmentation and surface reconstruction. When registered into a standard brain space, e.g., the Talariach coordinates [71] and MNI coordinates [72], and labeled with different regions of interest (ROIs) using brain templates, e.g., ICBM template [73] and the AAL template [74], the sMRI datasets can further be analyzed at the ROI level. Various techniques have been investigated to quantitatively analyze the morphological changes in cortex, e.g., grey matter density [49], cortical folding [75], curvedness and shape index [76,77], cortical thickness [69], and surface area [78,79], local gyrification index (LGI) [75], and many other shape [78,80] or texture features [81][82][83]. Mangin et al. [84] provided an extensive review on the popular morphological features, and Winkler et al. [85] demonstrated how to use these features in imaging genetics.
Diffusion MRI computing
The dMRI computing workflow consists of four major steps. The first step is to estimate the principle directions of the tensor or the orientation distribution function (ODF) in each voxel, which are used to quantitatively analyze the local white matter morphometry and probe the white matter fiber tracts in the following steps. Advanced fiber orientation estimation methods include the ball and stick mixture models [59], the constrained spherical deconvolution (CSD) [86], the q-ball imaging (QBI) [87], diffusion spectral imaging (DSI) [88], the generalized q-sampling imaging (GQI) [89], and the QBI with Funk Radon and Cosine Transform (FRACT) [90]. Wilkins et al. have provided a detailed comparison of these models [91]. In the second step, various parametric maps based on the tensors/ ODFs, i.e., fractional anisotropy (FA), mean diffusivity (MD), radial diffusivity (RD), and axial diffusivity (AXD) maps [92], reveal the focal morphometry of the white matter [93]. The third step is to apply the fiber tracking algorithms [94] to construct 3D models of the white matter tracts, referred to as tractography. Tractography further enables the quantitative analysis of fiber tract morphometry i.e., orientation and dispersion [95], and the analysis of connectome, i.e., connectivity networks of populations of neurons [96]. Brain parcellation and fiber clustering are two major approaches that can separate the neurons into different groups/ROIs, and construct the connectome [97]. Jones et al. [98] recently provided a set of guidelines which define the good practice in dMRI computing.
Functional MRI computing
After bias and artifacts correction in fMRI, a mean image of the series, or a co-registered anatomical image, e.g., sMRI, is used to estimate some registration coefficients that map it onto a template, followed by spatial smoothing and parameter estimation. Friston [99] gave an introduction to these procedures. When the brain is performing a task, cerebral blood flow (CBF) usually changes as neurons work to complete the task. The primary use of task-evoked fMRI is to identify the correlation between brain activation pattern and brain functions, such as perception, language, memory, emotion and thought [100,101]. Many models and methods have been suggested to detect patterns of brain activation, and some of them have been integrated into the software packages, such as the general linear model (GLM) in the SPM and FSL packages, and independent component analysis (ICA)/canonical correlation analysis (CCA) in AFNI package 4 . When brain is at resting state, fMRI is used to detect the spontaneous activation pattern in the absence of an explicit task or stimuli [102]. Resting-state fMRI enables us to deduce the functional connectivity between dispersed brain regions, which form functional brain networks, or resting state networks (RSNs). The Default Mode Network (DMN) is a functional network of several brain regions that show increased activity at rest and decreased activity when performing a task [103]. DMN has been widely used as a measure to compare individual differences in behavior, genetics and neuropathologies, although its use as a biomarker is controversial [104,105]. Rubinov [106] provided a review of the connectivity measures.
EEG and MEG computing
In EEG and MEG there are usually four components after removing the artifacts or unwanted data components that contaminate the signals. The analysis of event-related potentials (ERP) in EEG or event-related fields (ERF) in MEG aims to analyze brain responses that are evoked by a stimulus or an action, followed by spectral analysis, which transforms the signals into time-frequency domain. The aim of source reconstruction is to localize the neural sources underlying the signals measured at the sensor level. MRI is usually used to provide anatomical reference for source reconstruction. The aim of connectome analysis is to investigate the causality of brain activities and connectivity of brain networks by exploring information flow and interaction between brain regions. Gross et al. provided basic guidelines on EEG and MEG in research [51]. MNE 5 , EEGLAB 6 and eConnectome 7 are the most widely used software packages specifically designed for EEG and MEG computing.
Registration
Registration is the most commonly used technique in a neuroimaging study, and it finds the spatial relationship between two or more images, e.g., multimodal neuroimaging data alignment, serial alignment, and atlas mapping. A registration method can be defined in five aspects, i.e., a cost function for evaluating the similarity between images, a transformation model to determine the degree-of-freedom (DOF) of the deformation, an optimization method for minimizing the cost function, a sampling and interpolation strategy for computation of the cost function, and a multi-resolution scheme for controlling the coarseness of the deformation [119].
Registration methods can be roughly classified into three categories according to the DOF of their transformation models. Rigid registration has a DOF of 6 and allows for global translations and rotations. Affine registration, i.e., linear registration, allows for translation, rotation, scaling and skew of the images. Rigid and affine registration methods are usually sufficient for registering the multimodal datasets of same subject. However, deformable registration, which supports local deformations, is frequently needed to register images with large differences, e.g., registering an image to a template, or registering preand post-contrast images of the same subject. Deformable registration always requires rigid or affine registration to obtain a rough initial alignment. In many multimodal studies, a combination of these registration methods were used. For example, we recently jointly analyzed the ADNI FDG-PET and T1-weighted MRI datasets to classify AD and mild cognitive impairment (MCI) patients [79]. The PET images were aligned to MRI using an affine registration method (FSL FLIRT) [120]. The MRI datasets were registered to the MNI template using a deformable method (IRTK) [121], and the output registration coefficients by IRTK were applied to register the PET images to the same template. There are many other widely used registration algorithms, such as B-Spline registration [119,122], Demons [123], and SyN [124], and ITK [54] registration framework is a standard-bearer for all of these popular registration methods.
Segmentation
Segmentation is also referred to as brain parcellation or labeling. The brain can be segmented at different levels, i.e., tissues (grey matter, white matter, cerebrospinal fluid), cortical regions, and sub-cortical regions. The segmentation methods can be classified into three categories [125]. The first category is manual and semi-automatic methods, which require manually outlining the brain regions according to a protocol [126,127] or labeling the landmarks or seed points [128,129]. These methods are labourintensive and prone to intra-and inter-operator variation.
The second category is the atlas inverse mapping methods, which can inversely map a labeled atlas, e.g., the standard ICBM and AAL template, or user-defined image, to the original image space. Yao et al. recently provided a review of popular brain atlases [130]. Atlas inverse mapping is simple, but its performance heavily depends on the selected atlas and mapping method.
A more robust but complex solution is multi-atlas labeling, including the multi-atlas propagation with enhanced registration (MAPER) [131] and its variants [132,133]. These methods carry out whole-brain segmentation in the original image space by fusing multiple labeling results derived from the multiple atlases. Multiatlas labeling is computationally expensive, but the performance is comparable to manual labeling [125]. FSL FAST 8 and NifSeg 9 are widely used for brain tissue segmentation. IRTK, Advanced Normalization Tools (ANTs) 10 and NifReg 11 are commonly used in multi-atlas labeling as the normalization tools.
Feature fusion
Various features can be extracted from the neuroimaging data, as described in Sect. 2. Feature fusion is needed to jointly analyze the features from multimodal data. A straightforward solution is to concatenate input multi-view features into a high-dimensional vector, and then apply feature selection methods, such as t-test [134], ANOVA [118], Elastic Net [10,135], lasso [136] or a combination of these methods [137,138], to reduce the 'curse of dimensionality'.
These methods show promising results. However, the inter-subject variations cannot be eliminated using the concatenation methods because the inter-subject distances measured by different features may have different scales and variations. With a focus on the subjects, the feature embedding methods, such as multi-view spectral embedding (MSE) [139] and multi-view local linear embedding (MLLE) [140], have been used to explore the geometric structures of local patches in multiple feature spaces and align the local patches in a unified feature space with maximum preservation of the geometric relationships.
In addition, machine learning, especially deep learning, is increasingly used to extract high-level features from neuroimaging data. The advantage of learning-based features is they do not depend on prior knowledge of the disorder or imaging characteristics as the hand-engineered features. They are also essentially suitable for multimodal feature learning, and could expect better performance with larger datasets. However, learning-based features heavily depend on the training datasets [141]. Recently, Suk et al. [142] proposed a feature representation learning framework for multimodal neuroimaging data. One stacked auto-encoder (SAE) was trained for each modality, then the learnt high-level features were further fused with a multi-kernel support vector machine (MK-SVM). They further proposed another deep learning framework based on the deep Boltzmann machine (DBM) and trained it using the 3D patches extracted from the multimodal data [143].
Pattern analysis
Pattern analysis aims to deduce the patterns of disease pathologies, sensorimotor or cognitive functions in the brain and identify the associated regionally specific effects. A substantial proportion of pattern analysis methods focused on classification of different groups of subjects, e.g., distinguishing AD patients from normal controls [10,138]. Hinrichs et al. [144,145] and Zhang et al. [4] recently proposed the multi-kernel support vector machine (MK-SVM) algorithm, which is based on multi-kernel learning and extends the kernel tricks in SVM to the multiple feature spaces. We previously proposed a multifold Bayesian kernelization (MBK) model [79] to transfer the features into diagnosis probability distribution functions (PDFs), and then merge the PDFs instead of the feature spaces.
Regression-based pattern analysis is often used to identify the biomarkers of a group of subjects and probe the boundaries between different groups. The multimodal biomarkers can be based on the voxel features, ROI features and other features, as described in Sect. 2. Regression, such as Softmax regression [10], Elastic Net [135], and lasso [136] can be combined with feature learning in a unified framework.
Recently, the pattern analysis methods have been extended to simulation of future brain development based on the previous states of the brain and comparison to other brains. The basic assumption is that brains with similar cross-sectional and longitudinal deformations would have similar follow-up development [146,147]. When the population is sufficiently large to include a majority of neurodegenerative changes, the simulated results are more accurate.
Visualization
The neuroimaging data are mainly 2D and 3D, thus can be visualized in multi-dimensional spaces with 2D and 3D viewers. Multimodal data in 2D space are usually displayed with three layers, including background, foreground and label maps. The 3D viewer enables visualization of volume data, such as volume renderings, triangulated surface models and fiber tracts. Basic image visualization functions, such as look up tables, zoom, window / level, pan, multi-planar reformat, crosshairs, and synchronous pan / scroll for linked viewers, have been implemented in most visualization platforms, such as Slicer 12 and BioImage Suite 13 . These platforms also can accommodate visualization of high-dimensional data, e.g., tensors and vector fields.
Image markup refers to the graphical elements overlay, such as fiducials (points), rulers, bounding boxes, and label maps. Image annotation refers to the text-based information [148]. Both image markups and annotations are used to describe the meta information of the images, and annotations can be associated with markup elements as free text.
Another important use of the image markups is interactive visualization. The aforementioned platforms also provide a graphical user interface to interact with the data. For example, the volume rendering module of Slicer allows the users to define a bounding box and visualize the content in the bounding box only. Another module, tractography interactive seeding, is designed for interactive seeding of DTI fiber tracts passing through a list of fiducials or vertices of a 3D model. Slicer also allows the configuration of the layouts and manipulation of content in the viewers to suit a specific use case.
Task-oriented packages
The FOSS packages for neuroimaging computing are usually initially designed for a single task, such as registration and segmentation, and some of them then are extended to related tasks and become multifunctional packages. A number of the most widely used FOSS packages are listed in Fig. 1-packages and platforms.
Popular multifunctional packages include FreeSurfer, FSL, SPM, ANTs and NifTK. They cover similar aspects of functionality, but all have particular strengths. Free-Surfer and FSL provide a comprehensive solution of analysis tools for fMRI, sMRI and dMRI data. SPM is designed for the analysis of fMRI, PET, SPECT, EEG and MEG. The recently developed ANTs and NifTK are useful for managing, interpreting and visualizing multimodal data, and represent the state-of-the-art in medical image registration and segmentation. Tustison et al. [149] recently compared ANTs and FreeSurfer in a large-scale evaluation of cortical thickness measurements. Other packages may focus on a specific task or a set of related tasks. IRTK 14 , BRAINs [150], BrainVisa 15 , ITK-SNAP 16 and MindBoggle 17 are popular choices for registration and segmentation. In dMRI analysis, Camino 18 , DTI-TK 19 , DSI Studio 20 , TrackVis 21 and MRTrix 22 are most widely used packages. Soares et al. [151] recently conducted a thorough evaluation of these packages and the other dMRI computing packages in published studies. In functional neuroimaging computing, AFNI, PyMVPA 23 and REST 24 are widely used for fMRI analysis, whereas MNE, EEGLAB, eConnectome for EEG/MEG analysis.
All integrated platforms
For clinical applications, the medical image computing and visualization functions are part of the operation system and must meet the same standards of reliability, robustness, and simplicity of operation as the core imaging equipment. This is usually accomplished using software platforms added onto imaging system by the major vendors of medical image equipment and many specialized companies. Examples include Advantage Windows (General Electric), Syngo Via (Siemens), Vital Image Vitrea (Toshiba), Visage Amira (Visage Imaging), PMOD (PMOD Technologies Ltd.), Definiens (Definiens Inc.), and MimVista (MIM Software Inc.). These packages provide users with a set of analysis tools, compatibility with PACS and customer support. Such clinically oriented systems are not always affordable for academic researchers. Commercial solutions are typically not extensible by the end user, nor oriented towards prototyping of new tools, and may require specialized hardware, thereby limiting their applicability in projects that involve the development of new image computing methods.
As opposed to the commercial platforms, FOSS platforms are meant to provide a research platform that is freely available and does not require specialized hardware. A key step in the evolution of today's flexible and sophisticated capabilities for image-data-based research medicine was the creation of the 3D Slicer, which is based on a modular architecture [1,152]. 3D Slicer has become a successful and long-lived platform for the effective use of volumetric images in clinical research and procedure development. There are a number of platforms which aim to cover similar aspects of functionality, e.g., BioImage Suite, BrainSuite 25 , MIPAV 26 and MITK 27 .
Some of the libraries contributing to the foundation of Slicer were designed in close collaboration and often share the same developer community. These libraries, including CMake, ITK, VTK and CTK, are distributed as part of the National Alliance for Medical Image Computing (NA-MIC) Kit [153], which are actively supported by the NA-MIC research community 28 . Many popular packages, e.g., ANTs, MindBoggle, ITK-SNAP, DTIPrep, and MITK are also based on the NA-MIC Kit. NIPY 29 and NeuroDebian 30 are another two major research communities for neuroimaging research and platform development. To promote open science, neuroimaging tools and resources are always shared to other community members, usually through the INCF 31 and NITRC 32 forums.
Example: surgical planning for brain tumor resection
Tractography derived from dMRI has great potential to help neurosurgeons determine tumor resection boundaries in functional areas involving eloquent white matter fibers. The MICCAI DTI Challenge 33 is dedicated to comparing different fiber-tracking algorithms in reconstruction of white matter tracts, such as peritumoral tracts and cerebrospinal tract (CST). In this section, we present an example of pre-operative planning for brain tumor resection using the sMRI and dMRI data. The original data consist of a DWI volume and two structure scans of a patient with meningioma. The DWI scan was acquired with a spin-echo EPI sequence with the following parameters: voxel size 2.2 Â 2.2 Â 2.2 mm, FOV 220 mm, 58 slices, b-value 1000 s/mm 2 , 30 diffusion-weighted volumes and 1 baseline volume. The T1 original was acquired using a Ax 3D T1 MPRAGE sequence. The T2 original was acquired using a Ax 3D SPACE sequence.
The original data were computed in four steps, as illustrated in Fig. 2. For dMRI-specific computing, the tensors were estimated using a weighted least square (WLS) algorithm, and the output is a DTI volume. We then registered the T1 and T2 MRI volumes to baseline volume using the affine registration algorithm. The registered T1 and T2 volumes were used as the anatomical references. For sMRI-specific computing, tumor, ventricle and motor cortex were manually seeded and semi-automatically labeled in the baseline volume. The label map of the tumor and ventricle were than used to generate the 3D surface model using the Model Maker module in Slicer. The head surface, pial surface and white matter surfaces for both hemisphere were reconstructed using the Morphologist 2013 pipeline in BrainVisa [68]. For multimodal computing, the white matter tracts were visualized using the Slicer Tractography Interactive Seeding module, which allows users to mark the image with fiducials, and then move it around the tumor to visualize the peritumoral fiber tracts.
Future directions
The neuroimaging techniques will keep advancing rapidly, towards higher spatial/temporal/angular resolutions, shorter scanning time, and greater image contrast. In particular, the advances in the hybrid imaging scanners, e.g., PET/CT and PET/MRI, will enter more clinics and laboratories, lowering the cost for data acquisition and enabling more interesting discoveries in a greater multitude of populations and disorders. The continued growth in the complexity and dimensionality of neuroimaging data will spur the parallel advances of computational models and methods to accommodate such complex data. Such models and methods need to keep increasing the grade of automation, accuracy, reproducibility and robustness, and eventually need to be integrated into the clinical workflows to facilitate clinical testing of the new neuroimaging biomarkers.
The multidisciplinary nature of neuroimaging computing will keep bringing together clinicians, biologists, computer scientists, engineers, physicists, and other researchers who are contributing to, and need to keep abreast of, advances in the neurotechnologies and applications. New methods and models will be developed by the collaboration of different groups or individuals, with rapid iterations. Therefore, future packages and platforms need to respond more quickly to the updates, without compromising the functionality, extensibility and portability. This might cause difficulties in the maintenance of large packages and platforms, but will encourage the researchers to provide smarter solutions, e.g., providing an online version to make the whole process of developing, sharing and updating much quicker for both developers and users. | 6,610.8 | 2015-01-01T00:00:00.000 | [
"Computer Science",
"Biology"
] |
M-theory on non-Kähler eight-manifolds
We show that M-theory admits a class of supersymmetric eight-dimensional compactification background solutions, equipped with an internal complex pure spinor, more general than the Calabi-Yau one. Building-up on this result, we obtain a a particular class of supersymmetric M-theory eight-dimensional non-geometric compactification backgrounds with external three-dimensional Minkowski space-time, proving that the global space of the non-geometric compactification is again a differentiable manifold, although with very different geometric and topological properties respect to the corresponding standard M-theory compactification background: it is a compact complex manifold admitting a Kähler covering with deck transformations acting by holomorphic homotheties with respect to the Kähler metric. We show that this class of non-geometric compactifications evade the Maldacena-Nuñez no-go theorem by means of a mechanism originally developed by Mario García-Fernández and the author for Heterotic Supergravity, and thus do not require lP-corrections to allow for a nontrivial warp factor or four-form flux. We obtain an explicit compactification background on a complex Hopf four-fold that solves all the equations of motion of the theory, including the warp factor equation of motion. We also show that this class of non-geometric compactifications are equipped with a holomorphic principal torus fibration over a projective Kähler base as well as a codimension-one foliation with nearly-parallel G2-leaves, making thus contact with the work of M. Babalic and C. Lazaroiu on the foliation structure of the most general M-theory supersymmetric compactifications.
Introduction and summary of results
Supersymmetry has been linked in many different and profound ways to geometry since its discovery in the seventies, see for example [1][2][3][4][5] for more information and further references. In particular, supersymmetric solutions to Supergravity theories are closely linked to spinorial geometry, since they consist of manifolds equipped with spinors constant respect to a particular connection, whose specific form depends on the Supergravity theory under consideration [6,7]. The global existence of spinors and the other Supergravity fields usually constrains the global geometry of the manifold. However, the final resolution of the Supergravity equations of motion usually resorts to the use of adapted coordinates to the problem at a local patch of the manifold. Once we have solved the Supergravity equations of motion, a really hard problem by itself, we have to face another difficulty: in order to fully understand the solution, we need to extract as much information as possible about the global geometry of the manifold just from the existence of some explicit tensors and spinors, which we only know at a local patch. In other words, we want to know which manifolds are compatible with a particular set of tensors and spinors whose form is only known locally.
In fact, this is not a new problem in Theoretical Physics or Differential Geometry. It was already encountered soon after the discovery of General Relativity. Solving General Relativity's equations of motion 1 usually means solving the metric at a local patch of a JHEP09(2015)178 manifold which is not known a priori. In order to find which would be the physically meaningful manifold compatible with a locally defined metric, physicists back then created a procedure, by now textbook material [8], to obtain the maximally analytic extension of a given local patch endowed with a locally defined metric. In doing so for a simple solution, namely the Schwarzschild black-hole, one can find for example that the corresponding manifold can indeed be covered by a single system of coordinates and it is thus homeomorphic to an open set in R 4 . This procedure has been carried out in other popular solutions of General Relativity, for example the Reissner-Nordström and the Kerr black holes, which are relatively simple solutions compared to the kind of solutions that one obtains in Supergravity, where finding the maximally analytic extension associated to a local solution is more difficult due to their complexity.
Still, for supersymmetric solutions of Supergravity some information about the global geometry of the manifold can be obtained simply from the analysis of the existence of constant spinors: for example it may be possible to show that the manifold is equipped with various geometric structures, like Killing vectors or complex, Kähler, Hyperkähler, Quaternionic... appropriately defined structures. This already constrains the problem to a relatively specific class of manifolds. However, in performing such analysis sometimes there are involved various kinds of subtle choices, which, if modified, would yield a different global solution, a different manifold which however is locally indistinguishable from the unmodified one, since they exactly carry the same structure at the local level.
The first thing we are going to do in this note is to precisely modify one condition that had been implicitly assumed so far in String-Theory warped compactifications [9]: we are going to consider that the warp-factor is not a globally defined function on the compact manifold, but only, given a good open covering, locally defined on each open set. In order to do this consistently we will keep in mind that the physical fields of the theory must remain as well-defined tensors on the manifold, as it is required from physical considerations. The warp factor will turn out to be globally described as a section of an appropriate line bundle.
We are going to apply the previous modification to M-theory compactifications to three-dimensional Minkowski space-time preserving N = 2 supersymmetries. M-theory compactifications to three dimensions preserving different amounts of supersymmetry have been extensively studied in the literature [10][11][12][13][14][15][16][17]. In references [16,17] a very rigorous and complete study of the geometry of the internal eight-dimensional manifold has been carried out using the theory of codimension-one foliations, which turns out to be the right mathematical tool to characterize it, as suggested in [18].
Coming back to the case of compactifications to three-dimensional Minkowski spacetime preserving N = 2 supersymmetries, the analysis of the seminal reference [10] concludes that, among other things, the internal eight-dimensional manifold is a Calabi-Yau fourfold, although the physical metric is not the Ricci-flat metric but conformally related to it. This class of M-theory compactifications is very important for F-theory [19] applications, since compactifications of F-theory are in fact defined through them by assuming that the internal manifold is an elliptically fibered Calabi-Yau manifold, see [20] for a review and further references.
By assuming that the warp factor is not a global function, we will be able to generalize the result of reference [10]: we will find that the internal manifold must be a locally con-
JHEP09(2015)178
formally Kähler manifold [21][22][23] locally equipped with a preferred Calabi-Yau structure. Evidently, standard Calabi-Yau manifolds are a particular case inside this class. Let us say that this note is of course not the first attempt to obtain admissible F-theory backgrounds beyond the Calabi-Yau result; see references [24,25] for applications of Spin(7)-manifolds to F-theory compactifications.
It turns out that the solution that is obtained by assuming that the warp factor is not globally defined belongs to a simple class of non-geometric compactification backgrounds, and this is the approach that we will use in section 4. By non-geometric solution we mean here a global solution obtained by patching up local solutions to the equations of motion by means of local diffeomorphisms, gauge transformations, and global symmetries of the equations of motion, namely U-dualities. Notice that the term non-geometric is somewhat misleading since, although there is no guarantee that the global of a non-geometric solution is a smooth differentiable manifold, it will be for sure a well-defined mathematical object, with well-defined topological an geometric properties. We will use anyway the term nongeometric since it is widely used in the literature.
Non-geometric compactification backgrounds have been intensively studied in the literature from different points of view, see for example [26][27][28][29] for more details and further references. References [30,31] consider compactifications that are non-geometric from the Heterotic point of view and that become geometric compactifications via duality with Ftheory. References [32][33][34][35] contain a very interesting approach, named there G-theory, along the main idea of this work: among other things, they provide a very detailed construction of non-perturbative vacua by gluing local solutions to the equations of motion using different types of U-dualities. When performing such a non-trivial global patching, it is usually very difficult to obtain precise results about the topological and geometric properties of the global space of the compactification. This is partly due to the fact that the symmetries of the local equations of motion involved in the global patching can be relatively involved. That is why here we will consider the arguably simplest non-geometric global patching of local solutions to the equations of motion of eleven-dimensional Supergravity on a warped compactification background to three-dimensional Minkowski space-time. In exchange, we will be able to fully characterize the topology and the geometry of the global space.
More precisely, we will consider local solutions to the eleven-dimensional Supergravity equations of motion and we will globally patch them using local diffeomorphisms, gauge transformations and the trombone symmetry of the warp factor, which simply consists on rescalings of the warp factor by a real constant. Therefore, the global symmetry of the equations of motion that we will use to patch the local solutions is the simplest one. The idea is to consider the simplest non-geometric scenario in order to be able to fully characterize topologically as well as geometrically the global space of the compactification, something of utmost importance in order to understand the moduli space of a non-geometric compactification space. Hence, we hope this compactification background will hep to understand the nature of non-geometric compactification spaces, starting from the simplest case. In fact, we will be able to show that the global space of the compactification is a differentiable manifold, but with topological and geometric properties drastically different from the corresponding standard geometric compactification backgrounds.
JHEP09(2015)178
Let us be more precise. In this letter we will prove, among other things, that: • The non-geometric compactification space M is a particular class of compact complex manifolds admitting a Kähler covering with deck transformations acting by holomorphic homotheties with respect to the Kähler metric. In other words, M is a particular type of locally conformally Kähler manifold. Therefore, M admits a Käler covering M with Kähler formω, fitting in the following short sequence: The non-geometric warp factor is encoded in the geometry of M in an elegant way. Given a 2d-dimensional locally conformally Kähler manifold (M, ω, θ) with Kähler form ω and closed Lee-form θ, let L the trivializable flat line bundle associated to the The line bundle L is usually called the weight bundle of M and its holonomy coincides with the character χ : π 1 (M ) → R + . The image of χ is called the monodromy group of M . The warp factor is given by a flat connection of L which, after choosing a trivialization, is given by a closed one-form on M . If M is simply-connected its holonomy is trivial and then M becomes a Kähler manifold and the compactification becomes geometric.
• The non-geometry of the solution is then associated to the space being non-simplyconnected. If we take M to be simply connected, then M becomes a Kähler manifold and we obtain a standard geometric solution.
• We obtain an explicit solution, preserving locally N = 2 supersymmetry, on a complex Hopf four-fold that solves all the local equations of motion of the theory, including the equation of motion for the wrap factor. We explicitly write the local metric, flux and warp factor.
• The previous solution evades the Maldacena-Nuñez theorem by means of a mechanism originally developed by Mario García-Fernández and the author for Heterotic Supergravity, and thus there are non-geometric solutions with non-zero warp factor and flux without the need of higher derivative corrections.
• The explicit solution on the complex Hopf four-fold is equipped with a holomorphic elliptic fibration over a Kähler base. This points out to a possible application of this backgrounds to F-theory compactifications.
• The explicit solution on the complex Hopf four-fold admits a codimension-one foliation equipped with a nearly-parallel G 2 structure on the leaves. Then, the solution, even non-geometric, preserves the structure of the most general geometric compactification background of eleven-dimensional Supergravity on an eight-manifold, studied in references [16,17,36,37].
JHEP09(2015)178
In addition, the moduli space of locally conformally Kähler manifolds is usually very restricted, so compactification on this backgrounds may partially evade the moduli stabilization problem, present in many String Theory compactifications.
To summarize, we think this kind of non-geometric backgrounds is simple enough to be manageable, in particular it is possible to study their global topological and geometric properties, yet it is an honest non-trivial non-geometric compactification background. Therefore it might be a good starting point to a systematic rigorous study of non-geometric Supergravity backgrounds. This letter is a first small step in that direction.
The consequences of compactifying M-theory on a locally conformally Kähler manifold instead of a Calabi-Yau four-fold are manifold since the former is not Ricci-flat in a compatible way and has different topology than the latter. This deserves further study. In particular we think that it would be interesting to obtain, if possible, the effective action of a M-theory compactification on a non-Calabi-Yau locally conformally Kähler manifold.
The plan of this paper goes as follows. In section 2 we review, following [10], the analysis of M-theory compactifications to three-dimensional Minkowski space-time preserving N = 2 supersymmetries, pointing out in a precise way the well-known issue of imposing at the same time the classical Killing spinor equations and the l P -corrected equations of motion, an issue that is not present in the non-geometric setting since the Maldacena-Nuñez no-go theorem does no hold and thus there is no need of considering l P -corrections in order to have non-trivial solutions. In section 3 we modify the procedure explained in section 2 by considering a warp-factor which is not a globally defined function on the internal manifold. In section 4 we reinterpret the previous construction as a non-geometric compactification background. In section 5 we construct the non-geometric compactification and we obtain an explicit solution to all the equations of motion, studying some of its properties. In particular, we show that it is equipped with a holomorphic torus fibration over a projective Kähler base and with a codimension-one foliation with nearly-parallel G 2 -leaves.
M-theory compactifications on eight-manifolds
In this note we are interested in a particular class of non-geometric M-theory compactification backgrounds to N = 2 three-dimensional Minkowski space-time. These type of non-geometric compactifications will be introduced in section 4. In this section we will consider standard M-theory supersymmetric solutions, in order to motivate how the nongeometric version of these solutions may be useful in evading some of the issues present in the standard M-theory supersymmetric compactification case, such as the Maldacena-Nuñez no-go theorem [38]. The effective, low-energy, description of M-theory [39] is believed to be given by eleven-dimensional N = 1 Supergravity [40], which we will formulate on an eleven-dimensional, oriented, spinnable, differentiable manifold 2 M . We will denote by S → M the corresponding spinor bundle, which is a bundle of Cl(1, 10) Clifford modules.
JHEP09(2015)178
At each point p ∈ M we thus have that S p is a thirty two real, symplectic, Cl(1, 10) Clifford module, 3 with symplectic form ω.
The field content of eleven-dimensional Supergravity is given by a Lorentzian metric g, a closed four-form G ∈ Ω 4 cl (M) and a Majorana gravitino Ψ ∈ Γ S ⊗ Λ 1 (M) . We will focus only on bosonic solutions (M, g, G) of the theory, so we will truncate the gravitino. The classical bosonic equations of motion are given by: is a symmetric (2, 0) tensor. Eleven-dimensional Supergravity supersymmetric bosonic solutions, and in particular supersymmetric compactification backgrounds, are defined as being solutions of eleven-dimensional Supergravity admitting at least one real spinor ǫ ∈ Γ(S) such that: where D is the Supergravity connection acting on the bundle of Clifford-modules S. It is given by: Here ∇ is the spin connection induced from the Levi-Civita connection on the tangent bundle and · denotes the Clifford action of forms on sections of S. A supersymmetric configuration (M, g, G), namely a manifold admitting a D-constant spinor, does not necessarily solves the eleven-dimensional Supergravity equations of motion, but it is in some sense not far from being a solution, since the integrability condition of (2.3) can be written in terms of the equations of motion of the theory. The integrability condition of (2.3) can be found to be: where E 0 denotes the Einstein equation and F 0 denotes the Maxwell equation of elevendimensional Supergravity, see (2.1). Supersymmetric solutions of eleven-dimensional Supergravity can be divided in two classes, the time-like class and the null class, see references [41,42], where the classification of supersymmetric solutions of eleven-dimensional Supergravity was obtained. The time-like class is given by the supersymmetric solutions that satisfy: [41]. In other words, the Einstein equations follow from supersymmetry and the Maxwell equations. Hence, as it is well known in the literature, supersymmetry is closely related to the equations of motion but it does no always imply them. Supersymmetric compactification backgrounds are indeed time-like supersymmetric solutions of eleven-dimensional Supergravity. Compactification backgrounds of eleven-dimensional Supergravity are subject to the Maldacena-Nuñez no-go theorem [38], which we state here for completeness, applied to eleven-dimensional Supergravity.
Theorem 2.1. Every warped compactification of eleven-dimensional Supergravity on a closed manifold necessarily has constant warp factor and zero four-form flux G.
Therefore it would seem that if we want to define F-theory compactifications through eleven-dimensional Supergravity compactifications on an eight-dimensional manifold we will end-up having only the trivial flux-less solution. The standard way to evade the Maldacena-Nuñez theorem is to include in the theory higher-derivative corrections and/or negative-tension objects. Since it is not clear whether negative-tension objects exist in M-theory, the strategy of reference [10] was to include the particular higher-derivative correction to eleven-dimensional Supergravity which was known at the time and which gives a negative contribution to the energy-momentum tensor of the theory. This correction was computed for the first time in the one obtained in reference [44]. By means of M/F-Theory duality, higher-derivative corrections to M-theory and negative-tension objects in String Theory are dual manifestations of the same phenomena [43]. 4 The only dimensionfull parameter in eleven-dimensional Supergravity is the Planck-length l P and the higherderivative corrections of M-theory arise in an expansion in powers of this constant over the relevant length-scale of the the problem under consideration. For example, the higherderivative term considered in [10] is a l 6 P -correction. For simplicity from now on we will refer to the higher-derivative corrections of M-theory as l P -corrections.
The correction to the Killing spinor equation (2.3) corresponding to the correction considered in [10] is not known, so the analysis performed in [10] uses the classical Killing spinor equations and at the same time imposes l P -corrected equations of motion. This immediately runs into a possible inconsistency, since classical supersymmetry is consistent with the classical equations of motion through the integrability condition of the Killing spinor equation, so imposing l P -corrected equations of motion on a classical supersymmetric configuration leads to extra constraints that make the problem over determined. The possible inconsistency can be computed explicitly. Let E and F denote the l P -corrected Einstein and Maxwell equations of motion. They can be written as: where E 1 and F 1 denote the corresponding corrections to the classical equations of motion E 0 and F 0 , and which include the appropriate l P factors. Now, in order to study the consistency of imposing the l P -corrected equations of motion E and F as well as classical
JHEP09(2015)178
supersymmetry, we only have to assume that we indeed have a solution of l P -corrected equations of motion and compute what is the extra-constraint that appears when imposing the integrability condition of the classical Killing spinor equation. The result is, for every v ∈ X(M ), given by: Therefore, if we want a solution of the classical Killing spinor equation to be a solution of the l P -corrected equations of motion, the constraint (2.8) must be necessarily satisfied.
The outcome of the analysis of reference [10] is that classical supersymmetry imposes the manifold to be a Calabi-Yau four-fold, although the physical metric does not correspond to the Ricci-flat Calabi-Yau metric. Strictly speaking then, if we want to have a solution of the l P -corrected equations of motion, not every such Calabi-Yau is an admissible compactification background: only those satisfying equation (2.8), if any, should be considered as honest solutions of the equations of motion. Let us be more explicit for the case of [10]. In reference [10] the equations of motion of classical eleven-dimensional Supergravity were modified by the only known l P -correction at the time, obtained in reference [44], and which only affects the equation of motion for G. Hence, E 1 = 0 and F 1 is given by: where p 1 and p 2 are respectively the first and second Pontryagin classes of M , and β is an appropriate constant. Plugging equation (2.9) into equation (2.8) we obtain the explicit constraint that the Calabi-Yau four-folds coming out of the supersymmetry analysis of [10] have to satisfy in order to be an honest solution of the corrected equations of motion: Hence, and again strictly speaking, equation (2.10) constrains the class of admissible Ftheory compactification manifolds. Admissible in the sense of honestly solving the equations of motion of l P -corrected eleven-dimensional Supergravity and at the same time satisfying the classical Killing spinor equation of eleven-dimensional Supergravity. Of course, this problem is well-known to experts on the field, but unfortunately, as long as the elevendimensional Supergravity l P -corrected Killing spinor equation is not known, it seems not possible to solve it in a completely rigorous way. Important steps in this direction have been made in references [45][46][47], where a thoroughly and consistent analysis of M-theory compactifications in the presence of l P -corrections has been made, and even an educated guess for the l P -corrected Killing spinor equation has been proposed. Remarkably enough, the integrability condition of the l P -corrected proposal for the Killing spinor equation is compatible with the l P -corrected equations of motion, which definitely suggests that if the educated guess is not already the correct l P -corrected Killing spinor equation, it cannot be far from being it. One of the main conclusions of [46] is that even when one consistently takes into account l P -corrections, the internal manifold of the compactification is still topologically a Calabi-Yau four-fold. This strongly suggests that the conclusion of reference [10] is solid after properly taking into account l P -corrections.
JHEP09(2015)178
A possible, temporary, solution to the problem of imposing classical supersymmetry and l P -corrected equations of motion, would be to consider only the elliptically fibered Calabi-Yau four-folds, if any, that satisfy the constraint (2.8). This way we would be sure that we are dealing with honest solutions to l P -corrected eleven-dimensional Supergravity and at the same time it would single out a preferred class of eliptically fibered Calabi-Yau manifolds.
In this letter we are going to propose a simple class of twisted compactifications that directly evades the Maldacena-Nuñez theorem at the classical level and admits an interpretation as non-geometric compactification backgrounds. Therefore, no l P -corrections are needed to obtain non-trivial solutions, and thus no inconsistency arises, since there exist closed manifolds with non-trivial flux and warp factor that solve the equations of motion of the theory at the classical level. We don't want to imply with this that l P -corrections are not relevant: they certainly are of utmost importance in order to understand String/Mtheory backgrounds. However, we think that it may be a good idea to understand first non-geometric backgrounds without corrections, namely the zero-order solution, before considering l P -corrections to non-geometric backgrounds. The non-geometric solutions presented in this letter thus constitute the zero order non-geometric solution, which happens to be non-trivial, in the sense that it allows for non-trivial flux and warp-facor, in contrast to what happens in the geometric case. Let us stress though that ideally the ultimate goal would be to include and understand l P -corrections for geometric as well as for non-geometric compactification backgrounds.
N = 2 compactifications
In this section we briefly review the standard analysis, following the seminal reference [10], of supersymmetric M-theory compactifications to three-dimensional Minkowski space-time preserving N = 2 supersymmetry. We will consider the space-time to be an elevendimensional oriented spin manifold M . The supersymmetry condition corresponds to the vanishing of the Rarita-Schwinger supersymmetry transformation: where ǫ ∈ Γ(S) is the spinor generating the supersymmetry transformation and D : Γ(S) → Γ(T * M ⊗ S) is the eleven-dimensional Supergravity Clifford-valued connection given in terms of g and G. For M-theory compactifications we consider the space-time to be a topologically trivial product of three-dimensional Minkowski space R 1,2 and an eightdimensional Riemannian, compact, spin manifold M 8 The metric and the four-form are taken to be given by
JHEP09(2015)178
where ∆ ∈ C ∞ (M 8 ) is a function, δ 1,2 and Vol are the Minkowski metric and the volume form in R 1,2 , g is the Riemannian metric in M 8 , and G ∈ Ω 4 (M 8 ) is a closed four-form in the internal space. Finally, the supersymmetry spinor is decomposed as where S 1,2 is the rank-two real spinor bundle over R 1,2 and S 8 is the real, positivechirality, rank-eight spinor bundle over M 8 . We can form a complex pure spinor η as is the complex, positive-chirality, spin bundle over M 8 . Imposing the previous structure on M , together with supersymmetry condition (2.11), imposes restrictions on the flux G and constrains (M 8 , g) at the topological as well as the differentiable level [10]: • M 8 is equipped with a SU(4)-structure induced by η 1 and η 2 , which we assume everywhere independent and non-vanishing. The topological obstruction for the existence of nowhere vanishing real spinor, or in other words, the existence of a Spin (7)structure is given by where p 2 1 and p 2 are the integrated P 2 1 and P 2 Poyntriagin classes, and χ(M 8 ) is the Euler characteristic of M 8 .
• Let us make the following conformal transformatioñ The usefulness of this conformal transformation comes from the fact that the transformed spinors are constant with respect to the transformed connection, namelỹ We can also see that M 8 is a Calabi-Yau four-fold as follows, which might be more natural from the algebraic-geometry point of view: g,ω,J is the compatible triplet JHEP09(2015)178 of a complex structureJ, a symplectic structureω and a Riemannian metricg making M 8 into a Kähler manifold. SinceΩ is an holomorphic (4,0)-form, the canonical bundle is holomorphically trivial, which together with the Kähler property of M 8 , implies that it is a Calabi-Yau four-fold.
• The one-form ξ is given by de derivative of the warp factor ∆ as follows 21) and the four-form G is subject to the constraint Once we know that M 8 is a Calabi-Yau four-fold, equation (2.22) can be solved by taking G to be (2, 2) and primitive.
From the previous analysis we conclude that if we take M 8 to be a Calabi-Yau manifold, G ∈ H (2,2) (M 8 ) and primitive and ξ as in equation (2.21), we solve the supersymmetry conditions (2.11) and we obtain a supersymmetric compactification background of elevendimensional Supergravity to three-dimensional Minkowski space. Note that the physical metric g is conformally related to the Ricci-flat metricg, and by Yau's theorem we know that this is the unique Ricci-flat metric in its Kähler class, and thus it is, strictly speaking, the Calabi-Yau metric of M 8 .
Global patching of the local supersymmetry conditions
In this section we are going to slightly generalize the set-up reviewed 2 by considering a situation where the conformal transformation (2.17) cannot be performed globally, but only locally. We will be still satisfying the eleven-dimensional Supergravity supersymmetry conditions, which are local, but globally we will be able to construct a manifold that is not necessarily a Calabi-Yau four-fold but of a more general type. As we did in section 2, we will consider the space-time to be a topologically trivial product of three-dimensional Minkowski space R 1,2 and an eight-dimensional Riemannian, compact spin manifold The supersymmetry spinor is also decomposed exactly as it was done in section 2, namely Hence, as it happened in section 2, M 8 is equipped with two everywhere independent and non-vanishing Majorana-Weyl spinors, which implies again that the structure group of M 8 can be reduced to SU(4). Therefore {U a , ∆ a , ξ a } a∈I on M 8 . We will assume that the Lorentzian metric g and the four-form G can be written, for every open set U a ⊂ M , as follows: where g is a Riemannian metric in M 8 and G is a closed four-form in M 8 . In order to keep a clean exposition, we are not explicitly writing the atlas that we are using for R 1,2 , which, for each U a consists of an open set which we take to be the whole R 1,2 and its corresponding coordinate system φ a . More precisely, the atlas that we are considering for the topologically trivial product M = R 1,2 × M 8 is the following: where V a = R 1,2 for every a ∈ I, φ a are the coordinates in V a and ψ a are the corresponding local coordinates in U a . The atlas A is obviously not the simplest atlas for M , but anyway it is an admissible atlas which gives M the structure of a differentiable product manifold. We will see in a moment that the consistency of the procedure requires very specific changes of coordinates φ a The one-form ξ is given again by equation (2.21), only this time the result is valid locally in U a : Now, in order for the physical fields (g, G) to be well defined, they must be tensors on M . This is equivalent to, given any another open set U b such that U a ∩ U b = ∅, the following condition in U a ∩ U b : (3.6) Equation (3.6) is equivalent to: in U a ∩ U b , up to of course a change of coordinates, which in turn is reflected as a symmetry of the equations of motion. Therefore, we must define the difference between ∆ a and ∆ b in U a ∩ U b to be such that it can be absorbed by means of a coordinate transformation in R 1,2 . The only possibility is: Indeed, the multiplicative factor (3.8) can be absorbed by means of the following change of coordinates in R 1,2 :
JHEP09(2015)178
which is of course a diffeomorphism. It can be easily seen that where the second equation holds in U a ∩ U b ∩ U c = ∅. Therefore, the following data The crucial difference from the situation that we encountered in section 2 is that the conformal transformation (2.17) cannot be performed globally. Therefore, we cannot perform the conformal transformation that transforms the quadruplet {g, J, ω, Ω} into a Calabi-Yau structure in M 8 , which thus cannot be taken to be a Calabi-Yau four-fold; in particular, the supersymmetry complex spinor is not constant respect to any Levi-Civita connection associated to a metric in the conformal class of the physical metric. We can however perform the conformal transformation locally on ever open set U a , and thus we defineg where nowg a andη a are locally defined on U a . The local conformal transformation (3.14) implies, again locally in U a , that Notice that J is invariant and thus its conformal transformed is a well defined tensor on M 8 . An alternative characterization of these locally defined objects is through globally defined tensors taking values on the corresponding powers of the flat line bundle L, namelỹ g ∈ Γ(S 2 T * , L) ,η ∈ Γ(S C 8 , L Once we go to the locally transformed spinor and metric, we have that Hence, we can think of g,ω a ,J a ,Ω a , as a sort of preferred local Calabi-Yau structure in U a , which however does not extend globally to M 8 . We can withal obtain globally defined differential conditions in M 8 which, as we will see later, implies that the geometry of M 8 belongs to a particular class of locally conformally Kähler manifolds. Notice thatJ is a well-defined almost-complex structure; nonetheless it is not covariantly constant since the Levi-Civita connection in (3.18) is only defined locally in U a , asg a is only locally defined in U a . In spite of this, we can prove the following: Proof. Let N denote the Nijenhuis tensor associated to J. Then, on every open set U a ⊂ M 8 we can locally write N as follows (3.19) and thus N | Ua = 0 since J is covariantly constant respect to the locally defined Levi-Civita connection∇ a . Since this can be performed in every open set of the covering {U a } a∈I of M , we conclude that N = 0 and hence J is a complex structure. Since the metric g is compatible with J, (M 8 , g, J) is an Hermitian manifold.
Hence, we conclude that M 8 is a complex Hermitian manifold. There is another global condition that we can extract from (3.18) and which will further restrict the global geometry of M 8 . Equation (3.18) implies that on every open set U a we can find a function, namely ∆ a , such that d(∆ a ω)| Ua = 0 . (3.20) The key point now is that the de-Rahm differential does not depend on the locally-defined Levi-Civita connection∇ a and therefore we can actually extend equation (3.20) to an equivalent, well-defined, global condition in M 8 . Equation (3.20) is equivalent to Given another open set U b such that U a ∩ U b = ∅ we have that log ∆ a = log ∆ b + log λ ab at the intersection and therefore d log Therefore ω is a locally conformal symplectic structure [48] on M 8 and thus we have proven the following result:
The closed one-form [ϕ] ∈ H 1 (M 8 ), which is usually called the Lee-form, is precisely a flat connection in L → M 8 . Alternatively, one can define the ϕ-twisted differential d ϕ = d − ϕ whose corresponding cohomology H * ϕ (M 8 ) is isomorphic to H * (M 8 , F ϕ ), the cohomology of M 8 with values in the sheaf of local d ϕ -closed functions. Very good references to learn about locally conformally Kähler geometry are the book [49] and the review [50].
Solving the G-form flux
In order to fully satisfy supersymmetry, we have to impose on the four-form G the constraint In the Calabi-Yau case, this constraint was solved by taking G to be (2, 2) and primitive. In our case M 8 is not a Calabi-Yau manifold but it is a Hermitian manifold and hence it is equipped with a complex structure J and a compatible metric g. This turns out to be enough, as we will see now, to conclude that indeed the same conditions, namely G to be (2, 2) and primitive, solve equation (3.24) in our case. First of all, since we will use this fact later, notice that taking into account that η has positive chirality then equation (3.24) implies that G is self-dual in M 8 . Using the Clifford algebra Cl(8, R) relations together with the expresion of g as bilinear of η, it can be shown that [10]: Γāη = Γ a η = 0 , Taking now into account that G is self-dual, we can rewrite equation (3.29) as and hence we finally conclude that G is primitive and G ∈ H (2,2) (M 8 ).
The tadpole-cancellation condition
In order to allow for a non-zero G-flux in M 8 , we have to consider l P -corrections to elevendimensional Supergravity, due to the well-known no-go theorem of reference [38]. We will perform the calculation in this section in order to illustrate that although {ξ a } a∈I is not a well-defined one-form in M 8 , due to the fact that G is an honest tensor in M , the calculation can be carried out, and since M 8 is topologically Spin (7), we obtain the same result as in the standard case. The relevant correction for our purposes is given by [44] where G 4 = dC 3 and X 8 is an eight-form given by The corrected equation of motion for the four-form G adapted to the compactification background and written on M 8 reads where X 8 can be rewritten in terms of the first and second Pontryagin forms of the internal manifold [51] and β is an appropriate constant that we will not need explicitly. Notice that ϕ is a oneform locally given by the derivative of the corresponding local warp factor but it cannot be written globally as the derivative of a function, yet it is a well defined closed one-form in M 8 . Asuming that M 8 is closed, we integrate equation (3.33) to obtain Using now that M 8 has a SU(4)-structure and in particular it satisfies equation (2.15), we a result that was to be expected since it only depends on M 8 being equipped with a Spin(7)-structure.
A class of non-geometric M-theory compactification backgrounds
In section 3 we have proposed a twist in the standard gluing of the local equations of motion of eleven-dimensional Supergravity on eight-manifolds, by means of the use of a particular atlas on the space-time manifold. In this section we are going to adopt a different point of view, proposing a slightly modified construction, which highlights the interpretation of such twisted supersymmetric compactification backgrounds as non-geometric compactification JHEP09(2015)178 backgrounds. As a result, we will obtain that the total space of the non-geometric solution is still a manifold, although necessarily non-simply-connected, and that the Supergravity fields become tensors taking values on a particular line bundle.
Remark 4.1. The idea is to consider the local analysis of reference [10] and patch it globally in a non-trivial way by using not only local diffeomorphisms but also the trombone symmetry of the warp factor. We will see that when performing this non-trivial patching the global space is still a manifold, but with very different geometric properties and topology from the standard solution of reference [10].
The starting point is the standard one for compactification spaces. We will assume that the space-time manifold M can be written as a topologically trivial direct product where R 1,2 is three-dimensional Minkowski space-time and M 8 is an eight-dimensional, Riemannian, compact, oriented, spinnable manifold. According to the product structure (4.1) of the space-time manifold M, the tangent bundle splits as follows 5 is a good open covering of M . We define in M a family g = {g a } a∈I of local the Lorentzian metrics, where g a is a locally defined metric on R 1,2 × U a , given by: where g 8 is a Riemannian metric on M 8 and ∆ a ∈ C ∞ (U a ). Similarly, we define in M a family G = {G a } a∈I of local closed four-forms, where G a is a locally defined closed four-form on R 1,2 × U a , given by: where Vol is the standard volume form of Minkowski space. The idea now is to impose, for every a ∈ I, that each (g a , G a ) solves the local equations of motion of eleven-dimensional Supergravity. Then, we will patch this solutions globally by using not only local diffeomorphisms but also a particular global symmetry of the equations of motion. As we will see in a moment, the global geometry of M will depend on the specific patching used for the family of local solutions. More precisely, for each a ∈ I of the good open cover U = {U a } a∈I of M 8 , let us denote by:
JHEP09(2015)178
a local solution to the equations of motion of the theory, in the compactification background explained above. Notice that, in contrast to ∆ a and ξ a , which are defined only locally, g 8 | Ua and G| Ua are just the restriction of the globally defined tensors g 8 and G to U a , so they are well-defined globally. Now, a standard compactification would construct a global solution to the equations of motion by patching globally the family of local solutions {Sol a } a∈I using just local diffeomorphisms. This way we would obtain a globally well-defined metric g and four-form G on M . On the contrary, a non-geometric compactification is characterized by patching-up local solutions by using not only local diffeomorphisms but also symmetries of the equations of motion. What we did in section 3 was to patch up the global solution using local diffeomorphisms and also a particular symmetry of the equations of motion: the trombone symmetry of the warp factor, consisting on rescalings of the warp factor by a constant. In section 3 we used a very particular atlas in order to obtain that the Supergravity fields are tensors. We will drop here that condition and we will adopt the natural point of view of a non-geometric compactification: the global Supergravity fields obtained by the non-trivial patching of the local solutions may not be tensors but objects of a more general type. In our case we will obtain that the Supergravity fields are tensors valued on a particular line bundle L.
Hence, the kind of compactification backgrounds described in section 3 can be interpreted as being non-geometric, although of a simple type, namely the symmetry used to patch-up the solution globally is a simple rescaling of the warp factor. Remarkably enough, the global space of the compactification is still a manifold, something that is not guaranteed for more general non-geometric compactifications. Let us do then the global patching explicitly. Given the good open cover U , for each U a ∈ U we have a locally defined warp factor ∆ a ∈ C ∞ (U a ). As we have said, two local warp factors ∆ a and ∆ b , a, b ∈ I are related by a rescaling of the warp factor on the non-empty intersection U a ∩ U b = {∅} of U a and U b . Then we have: which as we have said is a symmetry of the equations of motion, as it is required to obtain a global solution. Equation (4.7) implies that: where the last equation holds on the triple non-empty triple intersection U a ∩U b ∩U c = {∅}. Hence: defines a real line bundle L over M . The warp factor it is thus globally given by a section ∆ ∈ Γ(L). Although ∆ is not a globally defined function on M 8 , it does define a globally defined closed one form ϕ ∈ Ω 1 (M 8 ), given on every open set U a ∈ U by: (4.10) Hence, we have obtained what is the global structure of the warp factor: after trivializing L, it is it is given as closed one-form by a connection on L.
JHEP09(2015)178
We have to patch-up now the local solutions {Sol a } a∈I of the the theory. We are not interested in patching up the most general local compactification background, but only the N = 2 supersymmetric compactification backgrounds of reference [10]. Therefore, each solution Sol a , a ∈ I, will be a local solution of the type presented in [10], namely conformal to a Calabi-Yau four-fold. Therefore, from reference [10] we obtain that: is equipped with a local SU(4)-structure (J a , ω a , Ω a ) satisfying: where J a is a local complex structure, ω a is a local symplectic structure, Ω a is a local (4, 0)form and ∇ a is the locally-defined Levi-Civita connection associated to g a = ∆ a g 8 | Ua . In other words, (J a , ω a , Ω a ) is a local integrable SU(4)-structure. In addition: where G| Ua is (2, 2) with respect to J a . (2), supersymmetric compactification backgrounds are time-like supersymmetric solutions, it is enough to satisfy the Maxwell equations for G in order to satisfy all the equations of motion.
Remark 4.2. As we explained in section
Using now that the global patching is performed by means of only local diffeomorphisms and the trombone symmetry, together with the results of reference [10], we obtain that, for each a ∈ I, the local SU(4)-structure (J a , ω a , Ω a ) can be written as: (4.14) where (g 8 , J, ω, Ω) is a global SU(4)-structure on M 8 , namely J is an almost-complex structure, ω is the fundamental two-form and Ω is the (4, 0). In order to fully characterize the non-geometric compactification background, we have to obtain the geometry of M 8 from the local supersymmetry conditions (4.12), (4.13) and (4.14).
JHEP09(2015)178
Proof. From (4.14) we see that the local complex structures {J a } a∈I patch up to a welldefined almost-complex structure J in M 8 . Writing the Nijenhuis tensor of J as: we obtain that N | Ua = 0 for every U a ∈ U , and thus J is integrable and (M 8 , g 8 , J) is a Hermitian manifold. In addition, G is globally (2, 2) in M 8 . Since ω a is, for each a ∈ I, a rescaling of ω| Ua we obtain that ω a ∧ G| Ua = 0 and G ∈ Ω 2,2 (U a ) are equivalent to: Using now that ϕ| Ua = d log ∆ a , we obtain that the global form of the equation of motion for the warp factor is: Since J a is a complex structure, we obtain that the condition ∇ a ω a is equivalent to dω a = 0, which in turn is equivalent to: dω = ϕ ∧ ω . (4.20) Using now proposition 4.3, we have then proven the following theorem: Theorem 4.4. Let M 8 be an eight-dimensional compact manifold equipped with a SU(4)structure (J, ω, Ω) such that J is integrable, ω is a locally conformally Kähler structure with Lee-form ϕ and Ω is locally conformally parallel. Then, (M 8 , J, ω, Ω) is a non-geometric admissible M-theory compactification background to three-dimensional Minkowski spacetime provided that there exists a closed four-form G ∈ Ω 4 (M 8 ) such that:
21)
and a solution to the equation of motion: of the warp factor exists.
The non-geometric background that we have obtained is very different from the standard Calabi-Yau compactification background, as a result of the non-trivial global patching. The topology of both manifolds is completely different. Hence, we should expect the effective theories of the compactifications to be completely different too. In the next section we will indeed provide an explicit example that solves the equations of motion for ξ and G, giving thus a counterexample to the Maldacena-Nuñez no-go theorem. The Sueprgravity fields are no longer global tensors, but tensors taking values on the line bundle L. In fact, we have: To summarize, we have found a simple class of non-geometric M-theory backgrounds in which the total space is again a manifold and which:
These are properties that are expected to be present in non-geometric backgrounds. It is because of the second feature that we will be able to construct an explicit eight-dimensional non-geometric background which evades the Maldacena-Nuñez no-go theorem and thus evades any possible inconsistency coming from introducing l P -corrections in the equations of motion but not in the classical Killing spinor equations.
Locally conformally Kähler manifolds
We have obtained that the supersymmetric conditions on an eleven-dimensional Supergravity compactification to three-dimensional Minkowski space-time, locally preserving N = 2 Supersymmetry, allow for locally Ricci-flat, SU(4)-structure, locally-conformal Kähler manifolds as internal spaces. It is first convenient to introduce the following definition: Definition 5.1. A n-complex dimensional locally conformal Calabi-Yau manifold is SU(n)structure locally-conformal Kähler manifold with locally Ricci-flat Hermitian metric and locally conformally parallel (n, 0)-form.
Hence, the kind of SU(4)-structure locally-conformal Kähler manifolds that we have obtained as admissible M-theory compactification backgrounds are precisely locally conformal Calabi-Yau manifolds, which motivates the definition. These are not necessarily Calabi-Yau four-folds (which would be a special subclass) and thus it is worth characterizing their geometry. First of all let us summarize the main properties of a generic compact locally conformal Calabi-Yau manifolds: 1. M is a compact Hermitian manifold. In other words, it is a complex manifold with a Riemannian metric g compatible with the complex structure J of the manifold.
2. M is equipped with non-degenerate two-form ω constructed fromJ and g, which is not closed but satisfies dω = ϕ ∧ ω .
Then M is a particular case of almost-Kähler manifold.
3. Although ω is not closed, locally one can always transform it such that the locally transformed two-form is closed. Therefore M is a particular case of locally conformally symplectic manifold [48].
4. The Riemannian metric g is not Ricci-flat. Despite of this, locally one can find a Ricci-flat metric locally conformal to g.
5.
There is a globally defined complex spinor which is not constant respect to the Levi-Civita connection associated to g. However, we can make a conformal transformation on the spinor such that it becomes locally constant respect to the Levi-Civita connection associated to the locally transformed metric.
JHEP09(2015)178
6. M is equipped with a SU(n)-structure, or in other words, it has zero first Chern class in Z. However, the canonical bundle is not holomorphically trivial, as the (n, 0)form that topologically trivializes it is not holomorphic, but only locally conformally parallel.
7. M is not projective, in contrast to the Calabi-Yau case. This seemingly technical detail is important, since for example, algebraic-geometry tools are very much used in order to study F-theory on elliptically-fibered Calabi-Yau four-folds.
There are in the literature several definitions of Calabi-Yau manifolds, not always equivalent. For definiteness, and in order to compare compact Calabi-Yau manifolds with compact locally conformal Calabi-Yau manifolds, we will use the following two equivalent definitions • A compact Calabi-Yau manifold is a compact manifold of real dimension 2n with holonomy contained in SU(n).
• A compact Calabi-Yau manifold is a compact Kähler manifold with holomorphically trivial canonical bundle.
From the previous definitions we see that a locally conformal Calabi-Yau manifold fails to be Calabi-Yau by only two conditions, namely they are not Kähler and they do not have an holomorphic (n, 0)-form, although they are equipped with a (n, 0)-form topologically trivializing the canonical bundle. Contrary to what happens with compact locally irreducible Calabi-Yau manifolds, compact locally conformally Calabi-Yau manifolds can have continuous isometries. Let us consider the case of a generic locally conformally Kähler manifold M : it is equipped with two canonical vector fields v and u given by g(w, u) = ϕ(Jw) , g(w, v) = ϕ(Jw) , ∀w ∈ X(M ) . Therefore we see that if u is a Killing vector field, then u and v commute and thus they are the infinitesimal generators of a R × R-action on M . This is a nice starting point to end-up having a torus action and therefore a principal torus bundle on M , as explained in proposition 6.4 of [18], where the necessary and sufficient conditions for u and v to define a principal torus bundle where obtained.
JHEP09(2015)178
Now that we know that locally conformal Calabi-Yau manifolds are not necessarily Calabi-Yau, an explicit example of a non-Calabi-Yau locally conformal Calabi-Yau manifold is in order. A general a locally conformally Kähler manifold M can be written has follows [52]: whereM is a simply connected Kähler manifold, and G is a covering transformation group whose elements are conformal for the respective Kähler metric onM . This restricts the class of manifolds we can consider, but it is not enough to specify a manageable class.
Fortunately, it turns out that there is a class of locally conformally Kähler manifolds that has been completely characterized, namely those whose local Kähler metric is flat, thanks to the following proposition [52]: A generalized Hopf manifold is a locally conformally Kähler manifold such that its Lee-form is a parallel form. Among the generalized Hopf manifolds are of course the classical Hopf manifolds. Four-dimensional complex Hopf manifolds are equipped with a SU(4)-structure and in fact it is an example of a non-trivial compact locally conformal Calabi-Yau manifold. In particular the metric of a Hopf manifold is not only locally Ricciflat but locally flat.
Let us explore then the geometry of compact complex Hopf manifolds, since they provide us with a non-trivial example of locally conformal Calabi-Yau manifolds.
An explicit solution on a complex Hopf manifold
A complex Hopf manifold CH m α of complex dimension m is the quotient of C m \ {0} by the free action of the infinite cyclic group S α generated by z → αz, where α ∈ C * and 0 < |α| < 1. In other words, it is C m \ {0} quotiented by the free action of Z, where Z, with generator α acting by holomorphic contractions. The group S α acts freely on C m \ {0} as a properly discontinuous group of complex analytic transformations of C m \ {0}. Hence, the quotient space: is a complex m-fold. It can be shown that complex m-dimensional Hopf manifolds CH m α are diffeomorphic to S 1 × S 2m−1 . As a result: The (1,1)-form ω 0 is not closed but it satisfies: where: Since g 0 , ω 0 and ϕ 0 are invariant under S α transformations, they descend to a well defined metric g and (1,1)-form ω in CH m α , with corresponding Lee-form ϕ. In C m \ {0} we have that ϕ 0 is exact, since ϕ 0 = d log z tz . This should be expected, as (g 0 , ω 0 ) is globally conformal to the standard Kähler structure on C m \ {0}. However, ϕ is not exact in CH m α , since there log z tz is not well-defined there. Let (U a , z a ) be a coordinate chart in CH m α .
The non-geometric solution. Let us take now m = 4, and α =ᾱ. Then CH 4 α is an eight-dimensional manifold of the type just described. In particular, it is equipped with a locally conformally Kähler structure (g, ω) induced by the quotient of the (g 0 , ω 0 ) given in equation (5.6). When α is real we can define in addition another globally defined (4,0)-form, induced by the following form on C m \ {0}: Now, since α is real, Ω 0 is S α invariant and therefore it induces a globally defined (4,0)-form Ω on CH 4 α satisfying: ∇ a Ω a = 0 , (5.10) and in particular: dΩ = 2ϕ ∧ Ω . (5.11) Therefore (g, ω, Ω) is precisely a locally conformally Calabi-Yau structure on CH 4 α , which is what was required by supersymmetry, see theorem 4.4. Therefore, in order to obtain a full non-geometric solution, we just have to solve the equation of motion for G = {G a } a∈I . Notice that, as we explained in section 4, local supersymmetry imposes: and that the only equation of motion that remains to be solved is the equation of motion for the warp factor, namely In order to solve it, we are going to take G = 0. Notice that this does not trivialize the flux G since there is still a part with one leg on M 8 . Taking G = 0 we obtain that the equation of motion for the warp factor reduces to: Since ϕ is already closed, this means that ϕ must be harmonic, in order to solve equation (5.14). It turns out that ϕ is indeed harmonic; which, since it is already closed, is the same as requiring g 8 to be the Gauduchon metric. Therefore: is a compact non-geometric solution of eleven-dimensional Supergravity with non-trivial flux and warp-factor. From a different point of view, one can see that Sol is locally conformal to flat space equipped with the standard Calabi-Yau structure and therefore it trivially solves the supersymmetry equations. Globally however the geometry is very different and that in turn allows for the existence of a non-trivial flux and warp factor. We could say then that the non-trivial warp-factor and flux are supported by the non-geometry of the solution.
Remark 5.5. In the standard compactification scenario, where instead of ϕ we have the derivative of the warp factor, say df , where now f is a globally defined function on M 8 , the equation of motion of the warp factor becomes, after setting G equals to zero: Since M 8 is closed then f must be constant. In our non-geometric case however, we get a harmonic one-form ϕ, so as long as the first betti number of M 8 is bigger or equal than one, we are guaranteed to have at least one non-trivial solution.
For completeness, let us write locally the warp factor and flux in local coordinates: let (U a , z a ) be a local chart of CH 4 α . Then we have that: and thus the warp factor of eleven-dimensional Supergravity compactified on CH 4 λ is, at every coordinate chart (U a , z a ), given by: Therefore, locally the four-form flux is given by: Remark 5.6. In section 3, a very particular atlas was used in order to make {G a } a∈I a globally defined tensor on M . However, from the point of view of a non-geometric compactification, we do not need to perform such an artificial construction. For non-geometric compactifications the global objects that locally correspond to the fields of the theory are not expected to be standard tensors. In this case G can be understood as a four-form taking values on a real line bundle L: The real line bundle L twists G from being a standard four-form and this is the result of the non-trivial global patching of the solution.
JHEP09(2015)178
The solution Sol = CH 4 α , g 8 , ω, Ω, ϕ that we have obtained, although the simplest of its kind, has very interesting properties, some of them shared also by more general locally conformally Kähler manifolds. In particular, it is equipped with a holomorphic torus fibration and a transversely orientable, codimension one, real foliation with a G 2 -structure on the leaves. Therefore, Sol has the geometric properties found in [16,17] for the most general N = 1 supersymmetric compactification of eleven-dimensional Supergravity to three-dimensional Minkowski space-time. This will be the subject of the next section.
Foliations and principal torus fibrations on Vaisman manifolds
Let (M, ω) be a Vaisman manifold, namely (M, ω) is a locally conformally Kähler manifold with a parallel Lee-form θ. Since θ is parallel, if it is non-zero at one point, it is non-zero at every point. Notice that the Hopf manifold that we found in section 5.1 to satisfy the local equations of motion of eleven-dimensional Supergravity is a particular example of Hopf manifold. A Vaisman manifold (M, ω) is equipped with four canonical foliations, which are defined on (M, ω) by means of the Lee-form θ and the complex structure J of M as follows [49,53]: • (M, ω) is equipped with a completely integrable and regular codimension-one distribution F ⊂ T M , given by θ = 0. We will denote by F the corresponding foliation, which is totally geodesic.
• (M, ω) is equipped with a completely integrable and regular dimension one distribution D ⊂ T M given by the vector field v = θ ♯ . We will denote by D the corresponding foliation, which is a geodesic foliation.
• (M, ω) is equipped with a completely integrable and regular dimension one distribution D ⊥ ⊂ T M given by the vector field w = J · v. We will denote by D ⊥ the corresponding foliation. Notice that the distribution D ⊥ is perpendicular to D, and hence the symbol used.
• (M, ω) is equipped with a completely integrable and regular dimension two distribution T = D ⊥ ⊕ D ⊂ T M . We will denote by T the corresponding foliation. The foliation T is a complex analytic foliation whose leaves are parallelizable complex analytic manifolds of complex dimension one. The leaves are totally geodesic, locally Euclidean submanifolds of M and the foliation is Riemannian.
If the foliation T is regular, as it happens for the solution Sol = CH 4 α , g 8 , ω, Ω, ϕ that we found in section 5.1, then the following result holds [49,53].
Theorem 5.7. If the foliation T on a compact Vaisman manifold (M, ω) is regular then: • The leaves are totally geodesic flat torii.
• The leaf space M = M/F is a compact Kähler manifold.
• The projection π is a locally trivial fibre bundle.
JHEP09(2015)178
Therefore, compact Vaisman manifolds with regular foliation T are equipped with a non-trivial torus principal bundle over a Kähler manifold, in the line of the suggestion made in [18]. This is interesting, because for F-theory applications one needs the eightdimensional compactification manifold to admit a elliptic fibration, which must be singular to be non-trivial since the compactification space is an irreducible Calabi-Yau four-fold, over a Kähler base. In our case the fibration can be non-trivial yet non-singular, and that is indeed the case of the solution of section 5.1. The interpretation, if any, of such non-singular and non-trivial fibrations in the context of F-theory remains unclear. This of course does not mean that there are no locally conformally Kähler four-folds admitting singular elliptic fibrations; this is currently an open problem.
On the other hand, a compact Vaisman manifold admits a topological Spin (7)structure, and in particular it is spin, as a consequence of having all the Chern numbers equal to zero. This Spin(7)-structure induces a G 2 -structure on the leafs of the canonical foliation F. If we restrict to the class of Hopf manifolds inside the class of Vaisman manifolds, then we have a very explicit result about the G 2 -structure present in the leaves. Notice that the solution of section 5.1 is a Hopf manifold, so the following result applies [49,53].
Let us apply proposition 5.8 to the m = 8 case. Then the foliation F is by sevendimensional spheres S 7 . But a seven-dimensional sphere S 7 is equipped with a nearly parallel G 2 -structure φ ∈ Omega 3 (S 7 ), which satisfies: Let denote by τ 0 ∈ Ω 0 (S 7 ), τ 1 ∈ Ω 1 (S 7 ), τ 2 ∈ Ω 2 14 (S 7 ) and τ 2 ∈ Ω 3 27 (S 7 ) the torsion classes of the G 2 structure φ. Then, the G 2 -structure φ satisfies τ 2 = 0 and it is therefore a particular case of the general characterization found in references [16,17,36,37] for the most general eleven-dimensional Supergravity supersymmetric compactification background to three-dimensions. It is rewarding to see that although we are considering non-geometric compactification backgrounds, the foliation structure of the most general geometric supersymmetric compactification background is preserved, which also indirectly shows that compactifying in this class of non-geometric compactification background should be possible in principle. | 15,024.8 | 2015-09-01T00:00:00.000 | [
"Physics"
] |
The Role of Artificial Intelligence in Shaping the Future of Education: Opportunities and Challenges
: Artificial intelligence has become a booming technology whereas it brings numerous positive changes within the educational process. The aim of the research is to describe the role of artificial intelligence in education through the analysis of its opportunities and challenges. The study involved the integration of qualitative (interviews, focus groups, and classroom observations) and quantitative methods (survey and statistical analysis). All the research procedures were organized according to the ethical standards for data collection and analysis. Over 50 recent scientific works were selected to analyze the research problem from different perspectives and present its comprehensive overview. The study involved 56 participants representing instructors from different institutions of higher education in Ukraine. The inclusion criteria were based on subject specialization, institution type, curriculum accreditation, and experience with artificial intelligence technologies. It was found that the positive impacts of artificial intelligence include personalized and adaptive learning, automated administrative tasks, enhanced support, e-learning facilitation, inclusivity, data-driven decision making, gamification, increased engagement, behaviour and predictive analytics, improved assessment. The challenges concerned the data privacy, security, bias, lack of understanding, transparency, necessity for additional training. The findings showed that the implementation of artificial intelligence through personalized learning, predictive analytics, intelligent tutoring systems, content creation systems, Virtual Reality, automated administrative tasks, and chatbots can shape the educational process effectively in future and modernize the future specialists’ training. The research results can be used within the educational institutions to increase the awareness of using artificial intelligence tools.
Introduction
Artificial intelligence (AI) is considered of significant importance in education today, revolutionizing traditional teaching and learning methods.Its tools and adaptive algorithms offer more personalized experiences and enhance the efficiency of educational process.AI-powered technologies facilitate interactive learning environments and promotes inclusivity within the diverse educational environment.AI is defined as the ability of a digital machine to perform tasks associated with human intelligent.These tasks involve computer vision, machine learning, natural language processing, recognizing patterns, solving problems, and making decisions (Chiu et al., 2023).At present, AI has become a booming technology whereas it brings numerous positive changes including enhanced productivity (Gao & Feng, 2023), improved healthcare (Davenport & Kalakota, 2019), and preventive maintenance in manufacturing (Rojek et al., 2023).AI opens up for new opportunities such as improvement of logistics and public transport (Jevinger et al., 2023).Other AI applications involve natural disaster prediction or response (Jain et al., 2023), precision agriculture (Gardezi et al., 2023).Kaur et al. (2023) admit that AI-based tools influence cybersecurity since they help detect cyber threats, indicate deviations and malicious activity, or improve user's authentication.Additionally, AI is increasingly being used in human resources management to automate the recruitment process and analyse employees' performance (Bujold et al., 2023).
The potential applications of AI-driven technologies in education include personalized learning and automated assessment (Lin et al., 2023).According to Kamalov et al. (2023), AI is used to enhance teacher-student collaboration and create flexible learning environment.Also, it provides adaptive learning promoting a more customized learning experience (Gligorea et al., 2023), and AI-powered tutoring systems provide real-time assistance (Lin et al., 2023).Ruiz-Rojas, Acosta-Vargas, De-Moreta-Llovet, & Gonzalez-Rodriguez (2023) emphasize that AI facilitates virtual classrooms through attendance tracking, adaptive content delivery, and active engagement with the material.AI, using data analytics, helps make data-driven decisions, enhance teaching strategies, and optimize teaching resources (Rahmani et al., 2021).Besides, AI-based Learning Management Systems (LMS) are designed to streamline administrative tasks, create personalized learning paths, or provide instant feedback (Firat, 2023;George & Wooden, 2023).Tan & Cheah (2021) along with Yordanova (2020) found that AI has a significant impact on gamification as it makes educational games more personalized and engaging.
The special attention is paid towards using of Generative Pre-trained Transformer (ChatGPT) within the educational process.It has become a promising tool that promotes a students' active participation and cognitive advancement (Montenegro-Rueda et al., 2023).ChatGPT is used as a virtual tutor to provide students with instant assistance and explanations on a wide range of subjects (Memarian & Doleck, 2023a).ChatGPT helps students in solving problems, offering guidance on research, and providing explanations for challenging concepts (Fütterer et al., 2023).In addition, this language model is employed to create interactive learning experiences, practice and improve language skills (Kohnke et al., 2023).Montenegro-Rueda, Fernández-Cerero, Fernández-Batanero, & López-Meneses (2023) and Annuš (2023) explain the use of ChatGPT as an assistant for lesson planning and content creation.It's important to note that despite ChatGPT is a valuable educational tool, it can be used as a complement to traditional teaching and cannot replace a human (Memarian & Doleck, 2023a;Yu, 2024).
It is worth mentioning that AI-driven technologies played an important role in emergency education, especially in situations where traditional education systems are disrupted due to natural disasters, pandemics, war, or other crises.AI facilitates the implementation of remote learning platforms that deliver educational content to the students affected with emergencies (Bakhov et al., 2021).Kamruzzaman, Alanazi, Alruwaili, Alshammari, Elaiwat, Abu-Zanona, Innab, Mohammad Elzaghmouri, & Ahmed Alanazi (2023) describe positive affect of AI tools on providing personalized learning experiences which is particularly important in emergency education because students may have diverse learning requirements and different stress level.During the COVID-19 pandemic AI enabled access to education through virtual classrooms, educational apps, and content delivery mechanisms (Pantelimon et al., 2021).Danylchenko-Cherniak (2023) indicates that AI-based instruments contribute to building the "normal" educational process during the war in Ukraine.According to Chmyr & Bhinder (2023), AI is able to increase the efficiency of military training significantly.
At the same time, AI presents various challenges that need to be addressed for successful implementation.They include data privacy and security, bias and fairness, lack explainability (Nguyen et al., 2023).At number of findings concern ethical considerations that concern the use of AI in education, such as data collection, the use of personal information, and the potential impact on students' independent work (Akgun & Greenhow, 2022;Grubaugh et al., 2023).Foltynek, Bjelobaba, Glendinning, Khan, Santos, Pavletic, & Kravjar (2023) insist that ethical framework must be established to guide the introduction of AI technologies within the educational process.
In the Ukrainian context, the special attention must be paid towards the study of Yuskovych-Zhukovska, Poplavska, Diachenko, Mishenina, Topolnyk, & Gurevych (2022) who explained the main principles of AI application in education and outlined the emerging problems and opportunities.Spivakovsky, Omelchuk, Kobets, Valko, & Malchykova (2023) revealed the institutional policies towards AI in university learning, teaching, and research.The findings show that AI-based tools are widely used within the process of formation of professional competencies at the Ukrainian institutions of higher education (Baranovska et al., 2023;Nosenko et al., 2019).The introduction of AI instruments like ChatGPT to optimize teaching, learning and educational management was investigated by Stepanenko & Stupak (2023).
Therefore, it is necessary to admit that AI plays a significant role in shaping contemporary education by revolutionizing traditional teaching and learning methodologies.Understanding the opportunities of AI-driven technologies will facilitate harnessing its capabilities within the educational process, enhancement of teaching methodologies, and automating administrative tasks.Additionally, as AI becomes increasingly integrated into the educational system, it is necessary to identify its challenges in order to equip both teachers and students with the tools to navigate ethical considerations and ensure the responsible use of AI in shaping the future of education.
Research Problem
The findings show that AI is an important tool for the enhancement educational process through building positive teacher-student collaboration (Kamalov et al., 2023), personalized learning (Lin et al., 2023).Additionally, AI-based instruments facilitate virtual classrooms (Ruiz-Rojas et al., 2023) construct intelligent tutoring systems (Lin et al., 2023;Memarian & Doleck, 2023a).The use LMS ensure uninterrupted learning and students' engagement (Veluvali & Surisetti, 2022) as well as simplify administrative procedures (George & Wooden, 2023).According to Montenegro-Rueda, Fernández-Cerero, Fernández-Batanero, & López-Meneses (2023), AI is used as an assistant for creation of courseware and customize educational materials.At the same time, AI provides rapid and effective responses to educational challenges during crises (Kamruzzaman et al. 2023;Danylchenko-Cherniak, 2023).Given that education becomes increasingly digital, AI plays an important role in preparing future professionals for the dynamic challenges of work environment, fostering their critical thinking, problem-solving skills, and technological competencies.
The extensive use of AI-driven technologies helps make more informed decisions and streamline repetitive tasks but also it brings ethical, societal, and even philosophical concerns in the educational environment.Firstly, it deals with privacy issues because AI systems often process vast amounts of personal data (Nguyen et al., 2023).Also, AI-based algorithms used for decision-making require smooth functioning and transparency (Memarian & Doleck, 2023b;Zhao & Gómez Fariñas, 2023).Other concerns are related to a digital divide between AI-enhanced learning experiences and resource-limited classroom (Kamalov et al., 2023), assessment of students' emotions or behaviors (González-Calatayud et al., 2021;Leon et al., 2023).Grassini (2023) explained the potential long-term consequences of using AI in education.Therefore, the responsible implementation of AI demands the development of a legal framework (Chmyr & Bhinder, 2023).Currently, the Artificial Intelligence Act or AI Act, a proposed regulation on AI in the European Union (European Commission, 2021) has become the basis for the design of institution-specific AI regulations.
Since AI is a powerful tool for enhancing the educational process by offering personalized learning experiences, automating administrative tasks, and increasing the students' performance, it is of great importance to analyze vast amounts of data to understand the impact of AI in education for educators, researchers, and students.Harnessing the benefits of AI effectively, it is necessary to address the potential challenges and ensure that AI technologies are ethically deployed within the educational process.
Research Focus
The in-depth analysis of the role of AI in education is of critical importance since it offers a paradigm shift and modernization drive of the educational process.In such a way, understanding the opportunities of AI-driven technologies can lead to the creation of improved educational environment, facilitation of complicated administrative operations and utilization of resources more efficiently.Obviously, AI enables the educational institutions to continually enhance their practices and introduce innovative strategies to meet the changing needs of a rapidly evolving world.Besides, the examination of the role of AI in education is not complete unless the challenges of AI implementation in education have taken into consideration.The results may be used for the development of applicable recommendations of introduction of AI tools for teaching, learning and research in order to legalize the interaction between AI, instructors, students, and administrators.
Research Aim and Research Questions
The aim of the research is to describe the impacts of AI in shaping the educational process through the analysis of its opportunities and challenges.
In doing so, the research will aim to answer the following questions: 1) How does AI-driven technologies impact education?What are the advantages of AI in shaping the educational process?2) What challenges related to the use of AI-driven technologies do the participants of educational process face? 3) What are future possible applications of AI in education?
General Background
The study the role of AI was conducted on the principles of flexibility, validity and reliability, and practical significance that are being placed on the research of this type.Also, the study suggests the integration of qualitative (interviews, focus groups, and classroom observations) and quantitative methods (survey and statistical analysis) that is called Q methodology.The combination of qualitative and quantitative techniques allows to gain a comprehensive understanding the opportunities and challenges of using AI-driven technologies.The mixed methodology had a number of advantages address comparing the findings from the qualitative and quantitative components and, therefore, provide a holistic perspective of the research problem.In additions, the research procedures were organized according to the ethical standards for data collection and analysis.
Sample / Participants / Group
The study involved 56 participants representing instructors from five different institutions of higher education in Ukraine.The respondents participated anonymously and voluntarily according to the ethical guidelines for education research to ensure their rights and confidentiality.Also, the guidelines dealt with the integrity of the research process and the responsibility for further dissemination of the findings.The inclusion criteria were based on the relevant characteristics such as subject specialization (Engineering, Information Technologies, Mathematics, Economics, Sociology, Foreign Language, Communication and Media Studies, Health, Law, and Education), institution type (higher education), curriculum accreditation (accredited by the State Service of Quality of Education of Ukraine), and experience with AI technologies (at least one academic year).it is worth stating that random sampling techniques were employed to enhance the generalizability of findings and reduce biases.Table 1 shows the profile characteristics of the participants.Source: based on the author's development.
Instrument and Procedures
The survey was carried out via an online questionnaire, consisting of both closed-ended and openended questions.The initial questionnaire was developed on the basis on the findings of the literature review on the role of AI-driven technology in education.To ensure the relevance of the survey, the pilot study was organized prior to the formal data collection that enabled to modify the questions and arrange the feedback procedures.The final questionnaire included 32 items ranging from "Completely agree" to "Do not agree".Also, it included 10 open-ended questions to collect the participants' additional attitudes towards AI applications in the institutions of higher education.
Classroom observations involved purposeful observation of using AI in the educational environment and evaluating its effect on teaching and learning.Before conducting observations, an observation framework with specific indicators for the role of AI in education was developed.The indicators were ranked from "Positive impact" to "Adverse effect" and addressed the ethical concerns of using AI in the classroom (e.g.privacy, security, bias, and transparency).During the observation phase, the observer systematically monitored the frequency of using AI-based tools in the classroom and their educational value including teacher-student interactions, the efficiency of instructional strategies used for material delivery, classroom management, students' engagement, the quality of instructional materials, and the objectivity of assessment.Following the observation, the post-observation conference was organized to share feedback among the participants, discuss the strengths and possible areas for improvement of AI applications in the educational process.The findings documented during the observations were included in the comprehensive research report.
Data Analysis
To analyse the data, the statistical analysis was used.It provided the summarizing the findings and presented a clear and concise overview of the collected survey data.The method involved creating frequency distributions to show how a trend occurs within the educational process.On the basis of the most common and less common responses it was possible to express the survey outcomes as percentages and to obtain the most interpretable representation of the data.The survey data was presented via charts or line graphs to convey the patterns clearly and make it more accessible to a target audience.The results of statistical analysis were used for further generation of conclusions.Moreover, Q methodology allowed to consider subjectivity of the research and to provide a specific understanding of the research problem. in the context of studying the role of AI in shaping future of education identified perspectives and their implications the shared viewpoints among participants were represented to assess possible opportunities of AI applications and challenges occurring within the educational process.
The positive impacts of AI-driven technologies on the educational process
During the survey the authors found that the positive impacts of AI-driven technologies of the educational process include personalized learning, automation of administrative tasks, adaptive learning, enhanced instructor's support, facilitation of e-learning, inclusivity, data-driven decision making, gamification, increased students' engagement, introduction of behaviour analytics, efficient assessment and predictive analytics.The application of questionnaire and classroom observations demonstrated that there is no significant difference in the data.Thus, personalized learning was admitted by the most participants as the advantages of implementation of AI within the educational process (41,2 % and 50,3% respectively).
At the same time, the biggest difference was demonstrated through automated processes and AIbased assessment.39,8 % of instructors said that automation enhances educational process and contribute to increasing its efficiency.But the positive use of automated platforms was noticed in 24,5 % of cases that proves it requires institution-specific adjustment and advanced training for instructors.Similarly, there was approximately 14 % difference between answers on the use of AI-based assessment tools.20,6 % of participants decided that assessment conducted with the use of AI instruments is effective.But much more effective use of assessment instruments was observed (34,5 %).This is due to objectivity of AI-based assessment, immediate feedback, customization of tasks, and their flexibility (It is possible to assess students' outcomes using various modalities such as text, images, audio, and video).Figure 1 shows the analysis of positive impacts of AI-driven technologies on the educational process based on questionnaire and classroom observations.
Challenges related to the use of AI in education
According to the survey conducted by the authors, the challenges facing the participants of the educational process include data privacy, security, bias, lack of understanding how to interpret AI outputs, transparency, necessity for additional training for instructors working with AI tools.Other challenges deal with resistance to change among the participants of the educational process, high costs of AI-based instruments, loss of human connection and risk of depersonalization.At the same time, the challenges are linked with legal and ethical considerations.Studying the integration of AI in education, the surveyors found that the most of instructors (78,4 % and 83,1 % respectively) consider data privacy and security as the biggest challenges.But during classroom observations it was concluded that the collection and analysis of student's data were adequately protected in more than half of these cases.Additionally, the authors proved that 76,5 % of respondents are worried about bias because if the data on educational process contains biases, AI may produce discriminatory results, affecting certain student groups unfairly.It is worth mentioning that 56,7 % of instructors considered that they lack the necessary training and professional development opportunities to effectively integrate AI into their teaching practices.In 45,8 % of cased it was found that instructors were able to work with AI tools effectively and turn the educational environment into effective one.The findings prove that addressing these challenges requires a collaborative effort involving both instructors and students to ensure that AI is implemented responsibly and is focus on positive learning outcomes.Figure 2 shows the analysis of challenges related to the use of AI in education based on questionnaire and classroom observations.The survey showed that most of participants insist on positive impact of AI-driven technologies.23,5 % of instructors agreed that AI is highly efficient tools and in future they will revolutionize the educational process significantly.31,4 % of participants stated that the use of AI within the educational process has average efficiency and enhance students' learning outcomes through the increasing of motivation, teacher-student collaboration, and improvements of assessment.According to them, the benefits in using AI-driven technologies exceed the negative impact their ethical and legal consideration.At the same time, it was revealed that 19,1 % of participants considered AI has a low efficiency and cause a number of challenges within the educational process.AI tools can be used sometimes or rarely as a complement to other teaching technologies.And 11,4 % of instructors are not aware of positive impact of AI instruments.They are sure that the use of AI-based techniques makes the educational process complicated and overloading.It testifies about the necessity of additional training for pedagogical staff oriented towards the explanation of AI possibilities in education.Figure 3 shows the efficiency of using AI technologies in education according to the survey participants.
Figure 3. Efficiency of using AI technologies in education
Source: based on the author's development.
Studying the possible AI application in education it was revealed that in most cases AI-based tools can be used to implement personalized learning paths (54,2 %), predictive analytics (51,6 %), intelligent tutoring systems (46,7 %), and automated content creation (46,2 %).Virtual Reality (VR) (43,8 %), tools for automated administrative tasks (42,7 %), educational chatbots (41,7 %) have great potentials to enhance the educational process in future since they optimize the routine tasks and make the educational environment more comfortable.At the same time, the use of robotics (18,3 %) and blockchain technology (17,5 %) were found as the most complex and least understood among instructors because they can be used only at the specialized departments where future Information Technology or Engineering professional are trained.development of interdisciplinary courses.Admission and enrollment processes verification of the authenticity of documents; identity verification; creation of the system of automated notifications; automated admissions decision-making.
28,7
Gamified learning platforms tracking user's engagement; incorporation of game elements; personalization of the learning experience.
39,9
Source: based on the author's development.
According to respondents' answers, AI is to revolutionize education further by introducing innovative applications incorporating personalized and adaptive learning, virtual tutors, intelligent assessment tools.The survey results prove that AI has the possibilities to improve administrative tasks through automation.As AI-driven technologies evolve, they contribute to students' collaboration, development of their creativity and critical skills, increase the inclusivity of the educational process.Thus, these potential applications of AI in education demonstrate the transformative impact of the technology on teaching and learning.The findings show that it is essential to apply these advancements with careful consideration of ethical and privacy implications to ensure responsible integration of AIbased tools within the educational environment.
Discussion
The research shows that, according to the respondents' answers, personalized learning, predictive analytics, intelligent tutoring systems, systems of automated content creation, VR, automation of administrative tasks, and educational chatbots have the greatest potentials.The implementation of these AI-powered tools will positively shape the educational process in future and enhance the efficiency of training of future specialists.It was found that these AI technologies are presently being used within the educational process but they are constantly evolving and showing more benefits when applied correctly and responsibly.To reveal the role of AI in shaping education, it is necessary to analyse these technologies in detail.1) Personalized learning paths.According to Jiang, Li, Yang, Kong, Cheng, Hao, & Lin (2022), personalized learning is a teaching strategy that tailors the students' learning style to their specific needs.It includes learning objectives, teaching methodology, and educational content that can vary depending on the requirements of students.Personalized learning paths can effectively integrate highquality learning resources, optimize the allocation of learning resources, and maximize their role (Ma et al., 2023).Shemshack & Spector (2020) admit that personalized learning is a complex activity approach that is generated from self-organization and self-assessment.The respondents found that personalized learning paths create the customized learning environment for students on the basis of their learning style and preferences.The educational content presented through personalized learning paths is often relevant and targeted to the students' interests, skill level, and learning objectives.At the same time, it was indicated that personalized learning paths improve assessment procedures significantly since they include real-time feedback.Choosing their learning paths students have a possibility to work interdependently an, therefore, they are getting ready to continuous professional development that is one of the requirements for future specialists in different fields.Moreover, using personalized learning paths increases motivation, engagement, and understanding of educational material.2) Predictive analytics deals with statistical algorithms analyzing historical data to make predictions about future outcomes.In the context of education, predictive analytics creates forecasts for the future by analyzing past trends in learning experiences (Sghir et al., 2023).In the modern educational environment, it is very important to predict and understand students' performance whereas it provides instructors with valuable information about learning patterns and allows students to recognize areas for further improvement (Syed Mustapha, 2023).During the last ten years, efficient and sophisticated predictive models developed with machine and deep learning enabled to discover complex hidden characteristics in learning data and enhances the efficiency of the educational process (Sghir et al., 2023).The findings showed that predictive analytics in education provides early interventions and instructors can identify students at risk of academic challenges.At the same time, predictive analytics contribute to the development of personalized learning paths significantly understanding students' learning style, strengths, and weaknesses.The respondents described the following AI applications that can be widely used in future within the educational process: curriculum design based on future professionals' needs and learning objectives, admissions and enrollment planning, graduation rate improvement through analysis of academic performance, financial concerns, or social aspects the students face.
3) Intelligent tutoring systems are integrated educational tools for customizing formal education using intelligent instruction or feedback (Guo et al., 2021).Singh, Gunjan, Mishra, Mishra, & Nawaz (2022) mention that the intelligent tutoring system is a technique that offers the students the exclusive educational materials developed on the basis of learning styles and preferred media of learning.According to Akyuz (2020), the use of intelligent tutorial systems makes the educational process dynamic and flexible because the students are longer engaged, able to study at their own pace, and are provided with professional assistance.The survey demonstrated that the intelligent tutoring systems positively affect helps students' addressing their learning styles and unique needs.A number of respondents confirmed that the intelligent tutoring systems provide real-time feedback and, as a result, help the instructors to adapt the teaching strategies and tasks based on their performance.Besides, the systems create continuous assessment and contribute to students' performance tracking.The educational environment built via the intelligent tutoring systems, enhances students' engagement and motivation as well.It was also found that the intelligent tutoring systems can incorporate gamified elements, simulation-based activities, and interactive exercises.This makes the students' learning experience more stimulating.4) Automated content creation is related to generation of educational materials without direct human intervention (Kamalov et al., 2023).This approach streamlines the content creation process significantly and increases its efficiency (Lee et al., 2023).Automated content is created through automatic question generation, text summarization or simplification, automated translation, or content recommendations according to students' preferences (Diwan et al., 2023;Ruiz-Rojas et al., 2023).On the basis of the survey results, it is obvious that automated content creation streamlines the process of generating educational materials, facilitates the creation of a large volume of educational materials, ensures the higher quality of educational content.Additionally, some respondents agreed that automated processes help to adhere to the defined standards and criteria, and develop the materials that are applicable to the students to learners with diverse needs, including those with limited learning abilities.Automated content creation facilitates the integration of innovative elements in the educational process including simulation-based activities, interactive multimedia, and games.5) VR is defined a computer-generated simulation of a real-life environment that can be interacted with using different devices like headset, gaming console, motion sensors, audio equipment, and VR software (Zhao et al., 2023).Serin (2020) admits that VR is an important innovation for future educational environment since VR applications the students to gain experiences that are dangerous or impossible for them to acquire in real life.Based on the findings of Santos Garduño et al. (2021), it is possible to state that using VR equipment with very realistic applications allowed students to have an immersive, interactive, and contextualized experience of the disciplinary contents.During the survey the instructors of the Ukrainian educational institutions enumerated a number of advantages related to the use of VR in education.They include: creation of realistic environments, providing simulations and virtual experiences according to real-world scenarios, increasing students' engagement and motivation, support of collaborative learning, and facilitation of e-learning especially when the traditional educational process is disrupted due to emergency situations.6) Automating of administrative tasks.Currently, AI-powered systems can handle admissions, enrollment, and course scheduling resulted in reducing the workload for administrative staff (George & Wooden, 2023).Administrative tasks are automated using personalized assistance platforms or chatbots (Parycek et al., 2023).The automating of administrative tasks is oriented towards boosting administrative efficiency, handling admissions, managing students' applications, organizing appointments, and collecting students' feedback (Ahmad et al., 2022;George & Wooden, 2023).The respondents found the automated administrative tasks lead to time efficiency, minimization of the risk of human errors, costs savings, and data management.The educational process benefit from the automating of administrative tasks since assessment instruments and attendance tracking systems are introduced.In addition, automated administrative tasks are related to enhancement of security measures by controlling access to students' sensitive information.
7) Educational chatbots.AI-powered chatbots are designed to follow people's conversations using text or voice interaction, providing information in a conversational manner (Labadze et al., 2023).The recent findings show that generative AI tools such as educational chatbots have the great potential to facilitate self-regulated learning (Chang et al., 2023).According to Wu and Yu (2024) AI-based chatbots had a greater effect on students in higher education, compared to those in primary education and secondary education.The survey participants showed that AI-based chatbots deliver immediate responses to inquiries, providing instant support for common questions related to courses, schedules, or resources.Also, educational chatbots offer personalized assistance and, therefore, facilitate the educational process.The findings proved that educational chatbots engage the students through interactive and conversational interfaces, increase their motivation to learning activities.But a number of respondents admitted that the positive use of educational chatbots require additional training and application of clear communication algorithms.
At the same time, according to the survey results, the participants face a number of challenges related to the use of AI in educational process.They include data privacy, security, bias, lack of understanding, transparency, necessity for instructors' additional training, high costs of AI-based instruments, and risk of depersonalization.It was concluded that addressing these challenges requires the elaboration of applicable recommendations on the use of AI within the educational institutions to gain the most benefits.
Conclusions
The research on the role of AI in shaping education resulted in the following conclusions: The survey findings demonstrated that the role of AI in education has increased in recent years and most of the instructors agree that the use of AI-driven technologies positively affects the educational process.It was found that the potential benefits of AI include introduction of personalized learning and customization of the educational content, automation of administrative tasks and facilitation of routine processes, providing students with educational and psychological support, creation of high-quality teaching recourses and modernization of curriculum.Also, AI tools bring data-driven decision-making, create innovative learning environment through VR, simulation-based technologies, or interactive educational games.AI helps students prepare for future professional activities and contribute to formation of their professional competencies (critical thinking, creativity, communication skills, digital literacy, adaptability and flexibility, readiness to innovative activity, and ability to continuous learning in particular).
Also, the findings show that currently AI tools are used to training future professional from different specialists including Science (Engineering, Information Technologies, Mathematics, Economics) and the Humanities (Foreign Language, Communication and Media Studies, Law, and Education).In addition, AI is widely used to teach interdisciplinary specialities like Sociology and Health.Obviously, AI has the great potentials to enhance the training of future professionals in science across various disciplines.According to the respondents, AI-driven tools help to analyze and interpret big amount of data, create predictive models in scientific research based on various variables, solve repetitive tasks in the laboratory, simulate a risk-free environment for students to conduct experiments and explore scientific concepts.In the process of training of future specialists in the Humanities, AI may contribute to text analysis, literature review, translation and cross-cultural studies.Alsos, it was found that AI-powered instruments may help researchers identify various trends in the Humanities, tailor the educational materials to individual students' learning styles and preferences in the Humanities subjects, and facilitates collaboration.The special attention was paid towards the use of VR and simulation-based technologies in training future health professionals since they create real-life healthcare scenarios, allows students to practice their clinical skills and surgical procedures, and enable to apply theoretical knowledge in a risk-free environment Since AI becomes increasingly integrated into the educational system, a number of challenges facing the participants of the educational process.According to the survey results they include data privacy, security, bias, lack of understanding how to interpret AI outputs, transparency, necessity for additional training for instructors working with AI tools.Other challenges deal with resistance to change among the participants of the educational process, high costs of AI-based instruments, loss of human connection and risk of depersonalization.At the same time, many challenges are linked with legal and ethical considerations and require the development of comprehensive framework on responsible use of AI-powered tools by the academic community for educational purposes as well as to conduct multifaced research.The study showed that personalized learning, predictive analytics, intelligent tutoring systems, systems of automated content creation, VR, automation of administrative tasks, and educational chatbots have the greatest potentials.The implementation of these AI-powered tools can shape the educational process effectively in future and modernize the training of future specialists.It was found that these AI technologies are presently being used within the educational process but they are constantly evolving and showing more benefits when applied correctly and responsibly.
The research results are to be introduced within the educational institutions to increase the awareness of using AI-based tools within the educational process.The findings will be useful for instructors working planning to work with AI-powered technologies and they can be implemented in the professional development curriculum for faculty staff oriented towards the extensive use of innovative technologies.Also, the ideas developed during the research can be applicable for improvement of educational programs within the Ukrainian educational institutions aimed at formation ©Copyright 2024 by the author(s) This work is licensed under a Creative Commons Attribution 4.0 International License.
of current professional competencies and training highly-skilled workforce for the country's economic growth.
Suggestions for Future Research
The research revealed the unique role of AI in enhancement educational process and the use AIpowered tools ensure uninterrupted learning and students' engagement, simplify administrative procedures, and provide rapid responses to educational challenges during crises.At the same time, it was found that the extensive use of AI technology brings ethical, societal, and even philosophical concerns in the educational environment.It is worth mentioning that the potential long-term consequences of using AI in education are not fully studied in the contemporary pedagogical literature.Definitely, future research must concern the potential long-term effect of AI-powered tools on the educational environment.
Moreover, in future the researchers must consider the development of approaches to responsible use of AI-powered tools within educational process and the design of comprehensive recommendations for the educational institutions.These recommendations must consider the AI potentials, algorithms for its using, and the methods to minimize the possible challenges.Further research must be oriented towards the institution-specific recommendations since learning objectives, infrastructure and technology readiness, students' population, curricula and educational programs may vary.
Also, the special attention of educational specialists must be paid towards exploration of the impact of AI on teacher professional development.Therefore, further research must concern the formation of instructor's AI competence and development of AI-specific methodology to cultivate the skills that are necessary for AI using.It is suggested creating the program for instructors' professional development related towards enhancement of digital literacy and formation of culture of innovation among pedagogical staff.
©Copyright 2024 by the author(s) This work is licensed under a Creative Commons Attribution 4.0 International License.
Figure 1 .
Figure 1.Positive impacts of AI-driven technologies on the educational process Source: based on the author's development.
Figure 2 .
Figure 2. Challenges related to the use of AI in education Source: based on the author's development.
©Copyright 2024 by the author(s) This work is licensed under a Creative Commons Attribution 4.0 International License.
Table 1 .
Participants' profiles Table 2 shows possible AI applications in the educational process.
©Copyright 2024 by the author(s) This work is licensed under a Creative Commons Attribution 4.0 International License.
Table 2 .
Possible AI applications in the educational process ©Copyright 2024 by the author(s) This work is licensed under a Creative Commons Attribution 4.0 International License. | 8,152.2 | 2024-02-13T00:00:00.000 | [
"Computer Science",
"Education"
] |
Remark on the energy-momentum tensor in the lattice formulation of 4D $\mathcal{N}=1$ SYM
In a recent paper, arXiv:1209.2473 \cite{Suzuki:2012gi}, we presented a possible definition of the energy-momentum tensor in the lattice formulation of the four-dimensional $\mathcal{N}=1$ supersymmetric Yang--Mills theory, that is conserved in the quantum continuum limit. In the present Letter, we propose a quite similar but somewhat different definition of the energy-momentum tensor (that is also conserved in the continuum limit) which is superior in several aspects: In the continuum limit, the origin of the energy automatically becomes consistent with the supersymmetry and the number of renormalization constants that require a (non-perturbative) determination is reduced to two from four, the number of renormalization constants appearing in the construction in Ref. \cite{Suzuki:2012gi}.
Introduction
Although the energy-momentum tensor is a very fundamental observable in field theory, it is not straightforward to define the energy-momentum tensor in the lattice field theory, because the spacetime lattice explicitly breaks translational and rotational symmetries. For four-dimensional lattice gauge theories containing fermions, a strategy to construct an energy-momentum tensor, that satisfies the conservation law in the quantum continuum limit, has been given in Ref. [2]. In quantum field theory, a symmetry is generally expressed by corresponding Ward-Takahashi (WT) relations and the conservation law is merely a special case of WT relations that holds only when the Noether current stays away from other operators. Nevertheless, as demonstrated in Ref. [2] (and probably as can be proven generally), any lattice energy-momentum tensor, that is conserved in the continuum limit, is expected to reproduce all WT relations associated with the translational invariance for elementary fields in the continuum limit. 1 This shows the fundamental importance of the conservation law in the continuum limit for a lattice energy-momentum tensor.
The present Letter is an extension of our recent paper [1] concerning the energy-momentum tensor in the lattice formulation of the four-dimensional N = 1 supersymmetric Yang-Mills theory (4D N = 1 SYM). In Ref. [1], we proposed a possible lattice energy-momentum tensor by mimicking the structure of the Ferrara-Zumino (FZ) supermultiplet [3]. That is, we defined a lattice energy-momentum tensor by a renormalized, modified supersymmetry (SUSY) transformation of a renormalized SUSY current on the lattice. Then, assuming the locality and the hypercubic symmetry of the lattice formulation and that the bare gluino mass is tuned so that the SUSY current is conserved [4,5], the energy-momentum tensor was shown to be conserved in the quantum continuum limit; as noted above, this is a minimal and fundamental requirement on the energy-momentum tensor. This lattice energy-momentum tensor can be a basic tool to compute physical quantities related to the energy-momentum tensor, such as the viscosity.
Although the general strategy to construct a conserved lattice energymomentum tensor in Ref. [2] is applicable also to the lattice formulation of 4D N = 1 SYM, our method that is based on the N = 1 SUSY in the target theory is interesting, because the direct imposition of the conservation law requires the (non-perturbative) determination of at least six renormalization constants [2], while the method in Ref. [1] contains only four (or three if one does not care about the ambiguity of the zero-point energy) unknown renormalization constants; see below.
In the present Letter, as a possible alternative of the definition in Ref.
[1], we propose a quite similar but somewhat different definition of a lattice energy-momentum tensor for 4D N = 1 SYM; this energy-momentum tensor is also conserved in the continuum limit. This new definition is superior in several aspects compared with the one in Ref. [1]: In the continuum limit, the origin of the energy automatically becomes consistent with SUSY and the number of renormalization constants that require a (non-perturbative) determination is reduced to two from four, the number of renormalization constants appearing in the construction in Ref. [1]. We follow the notational convention of Ref. [1]. 2
2.
A new definition of the energy-momentum tensor on the lattice As Ref. [1], our starting point for the construction of a lattice energymomentum tensor is a renormalized SUSY WT relation on the lattice: Throughout the present Letter, we assume that the composite operator denoted by O is gauge invariant and finite, i.e., it is already appropriately renormalized. In the left-hand of Eq. (2.1), S µ (x) is a renormalized Noether current associated with SUSY (the renormalized SUSY current),
2)
2 Vector indices µ, ν, . . . , run over 0, 1, 2, 3. ǫ µνρσ denotes the totally anti-symmetric tensor and ǫ 0123 = −1. All gamma matrices are hermitian and obey {γ µ , γ ν } = 2δ µν . We define γ 5 ≡ −γ 0 γ 1 γ 2 γ 3 and σ µν ≡ [γ µ , γ ν ]/2. The charge conjugation matrix C satisfies, The generator of the gauge group SU (N c ), T a , is normalized as tr(T a T b ) = (1/2)δ ab . g is the bare gauge coupling constant. x, y, z, . . . denote lattice points and a is the lattice spacing;μ is the unit vector in the µ-direction. U µ (x) ∈ SU (N c ) denotes the conventional link variable and where Z, Z S and Z T are renormalization constants 3 and lattice operators S µ (x) and T µ (x) are defined by Here and in what follows, [F µν ] L (x) denotes a lattice transcription of the field strength, defined from the clover plaquette P µν (x), In the right-hand side of Eq. (2.1), ∆ ξ is a modified SUSY transformation on lattice variables with the localized transformation parameter ξ(x), which depends on another renormalization constant Z EOM [1]; the localized transformations δ ξ and δ F ξ are defined by (ξ( Finally, E(x) in Eq. (2.1) is a dimension 11/2 operator that is given by a linear combination of renormalized operators with logarithmically divergent coefficients.
The derivation of the renormalized SUSY WT relation (2.1) is somewhat too lengthy to be reproduced here; we refer the interested reader to Ref. [1] and references cited therein, especially for the origin of various renormalization constants. Here, we simply note that Eq. (2.1) reduces to the conservation law of the renormalized SUSY current S µ (x) in the continuum limit, when the point x stays away from the support of the operator O by a finite physical distance (we express this situation by x supp(O)), This follows because in the right-hand side of Eq. (2.1), theξ(x) derivative vanishes and the dimension 11/2 operator E(x) does not produce an O(1/a) linear-divergence that could compensate the factor a when x supp(O). In deriving Eq. (2.1), we assumed that the the bare gluino mass M is tuned to the supersymmetric point [4,5,6,7,8] and that there is no exotic SUSY anomaly of the form of a three-fermion operator [7,8]. The relation (2.10) can be regarded as the restoration of SUSY (that is broken by the lattice regularization) in the continuum limit.
In Ref. [1], a symmetric energy-momentum tensor on the lattice was defined by, where D denotes the lattice Dirac operator and 4 and∆ ξ is a global modified SUSY transformation on lattice variables, that is obtained by setting the local parameter constant, ξ(x) → ξ, in Eq. (2.7).
c in Eq. (2.11) is a constant to be fixed, although it does not affect the conservation of T µν (x). Using the SUSY WT relation (2.1), it can be shown that the energy-momentum tensor (2.11) is conserved in the continuum limit [1]. The definition through Eqs. (2.11) and (2.12) was suggested by the structure of the FZ supermultiplet [3] that the SUSY transformation of the SUSY current is basically the energy-momentum tensor. Now, our new definition of a lattice energy-momentum tensor proceeds as follows: By using the renormalized SUSY current (2.2), we first define the quantity, where D x is a hypercubic region on the lattice that contains the SUSY current S µ (x) entirely; the point x is taken as the center of the region D x so that D x is invariant under the hypercubic rotation around x. Moreover, the size of the region D x must be "macroscopic", i.e., it must be finite in the physical unit. The definition of Θ µν (x; D x ) thus depends on the choice of the region D x as its argument indicates. From this Θ µν (x; D x ), we define a symmetric energy-momentum tensor on the lattice, simply by symmetrizing it with respect to the indices: (2.14) The idea behind the definition in Eqs. (2.13) and (2.14) is as follows: In the continuum theory, at least formally, the integral of the total divergence of the SUSY current in the continuum theoryS ρ (y), Dx d 4 y ∂ ρSρ (y), where the region D x contains an operator at the point x, generates the SUSY transformation, on the operator (δ ξ andδ ξ are localized and global SUSY transformations, respectively). In the classical continuum theory, on the other hand, the energy-momentum tensorT µν (x) is given by the SUSY transformation of the SUSY current [3] as (see Ref. [1]), where / D denotes the Dirac operator. Thus one sees that the definition (2.13) is a lattice transcription of the relation expected in the continuum theory, 5 In the classical continuum theory, the right-hand side of Eq. (2.18) is independent of the choice of the region D x because of the current conservation.
In the lattice theory, however, this property is lost because the conservation law of the SUSY current is broken by O(a) terms. That is, the dependence on D x in Eqs. (2.13) and (2.14) is an O(a) lattice artifact and the physics in the continuum limit should not depend on the choice of the region D x . 6 We note that the energy-momentum tensor (2.14) is manifestly finite, because the operator y∈Dx ∂ S ρ S ρ (y) in Eq. (2.13), being the sum of the total divergence, does not have any overlap with the operator S µ (x); Eq. (2.13) is thus the sum of products of renormalized operators at points separated by finite physical distances.
Let us show that the lattice energy-momentum tensor T µν (x; D x ) (2.14) is conserved in the continuum limit. For this, we first show the conservation of Θ µν (x; D x ) (2.13): Let 2R be the size of D x , (2.19) where L 4 denotes the whole lattice of the size L 4 , and define a three-dimensional cubic region orthogonal to the µ-direction as Then, from the definition (2.13) and the SUSY WT relation (2.1), we have Now noting that the combination y∈Dx ∂ S ρ S ρ (y) does not have any overlap with the point x, we see that Eq. (2.22) is the sum of correlation functions of renormalized operators with no mutual overlap with an overall factor of a (in front of the operator E(x)). Thus, Eq. (2.22) vanishes in the a → 0 limit and Θ µν (x; D x ) is conserved in the continuum limit: Next, we consider the anti-symmetric part of Θ µν (x; D x ), The conservation of A µν (x; D x ) can be shown by the same argument as in Ref. [1]: Assuming the hypercubic symmetry, it turns out that any dimension 4 anti-symmetric rank-2 tensor can be expressed as 7 where A 1 and A 2 are constants and the dimension 5 operator G µν (x) is at most logarithmically divergent. From this general form, we have This is trivially true for the first term in the right-hand side of Eq. (2.25). For the second term in the right-hand side of Eq. (2.25), this holds because of the equation of motion of the gluino field. Finally, for the last term of Eq. (2.25), this follows because of the overall factor of a. The combination of the above two properties, Eq. (2.23) and Eq. (2.26) implies the conservation law of the symmetric part of Θ µν (x; D x ), Eq. (2.14), that is This completes the proof of the conservation law of our lattice energy-momentum tensor (2.14).
For the new definition in Eqs. (2.13) and (2.14), we can further show that the expectation value of the energy density vanishes in the continuum limit, when periodic boundary conditions are imposed on all the fields. This property of the energy density operator is natural from the perspective of SUSY, because Eq. (2.28) corresponds to the derivative of the supersymmetric partition function (i.e., the Witten index [13]) with respect to the temporal size of the system. In other words, Eq. (2.28) shows that the origin of the energy that is consistent with SUSY is automatically chosen in the continuum limit; this is a virtue of the present definition of the energy-momentum tensor compared with our previous one [1]. 8 To show Eq. (2.28), we note that y∈L 4 ∂ S ρ S ρ (y) = 0 holds under the periodic boundary conditions. From this, where L 4 − D x denotes the complement of the region D x in the lattice L 4 and we have used the SUSY WT relation (2.1) in the second equality. Then since this is a correlation function of renormalized operators with no mutual overlap with an overall factor of a, this vanishes in the continuum limit, i.e., Eq. (2.28) holds. Our new definition in Eqs. (2.13) and (2.14) contains two unknown combinations of renormalization constants which must be determined non-perturbatively. One is the overall normalization of S µ (x), ZZ S and other is the ratio in S µ (x), Z T /Z S . See Eq. (2.2). Among these, the latter ratio Z T /Z S has been non-perturbatively measured in the process to find the SUSY point in nonperturbative lattice simulations using the Wilson fermion [9,10,11,12]. The former overall normalization ZZ S may be determined from the expectation value of the energy operator −a 3 x T 00 (x; D x ) in a certain reference (e.g., one-particle) state. Thus, the determination of unknown constants is much simpler than our previous construction in Ref. [1] that requires the determination of other two unknown constants, Z EOM in Eq. (2.7) and c in Eq. (2.11). This point can be a great advantage in practical applications.
On the other hand, the new definition has an O(a) ambiguity associated with the choice of the region D x in Eq. (2.13) and this ambiguity can be a possible source of the systematic error. Also, since the energy-momentum tensor is defined by the product of two SUSY currents at different points as Eq. (2.13), the application requires the computation of correlation functions with the number of arguments as twice as large compared with the correlation function of the energy-momentum tensor (e.g., one defined in Ref. [1]). Only an implementation of the present construction in actual numerical simulations will answer whether there is a real payoff or not.
We believe that the basic idea on the construction of a lattice energymomentum tensor in the present Letter (and in Ref. [1]) is applicable to more general 4D supersymmetric models. For our argument on the conservation law of the renormalized SUSY current in the continuum limit to hold, however, one has to carry out parameter fine tuning of sufficiently many numbers that ensures the SUSY WT relation (2.1). If such fine tuning is feasible for the model under consideration, our idea to construct a lattice energymomentum tensor from the SUSY current will be useful to study physical questions in supersymmetric models, such as the spontaneous SUSY breaking, the mass and the decay constant of the pseudo Nambu-Goldstone boson associated with the (classical) dilatation invariance and so on. | 3,677.8 | 2012-09-24T00:00:00.000 | [
"Physics"
] |
Structural studies of metastable and equilibrium vortex lattice domains in MgB2
The vortex lattice in MgB2 is characterized by the presence of long-lived metastable states, which arise from cooling or heating across the equilibrium phase boundaries. A return to the equilibrium configuration can be achieved by inducing vortex motion. Here we report on small-angle neutron scattering studies of MgB2, focusing on the structural properties of the vortex lattice as it is gradually driven from metastable to equilibrium states by an AC magnetic field. Measurements were performed using initial metastable states obtained either by cooling or heating across the equilibrium phase transition. In all cases, the longitudinal correlation length remains constant and comparable to the sample thickness. Correspondingly, the vortex lattice may be considered as a system of straight rods, where the formation and growth of equilibrium state domains only occurs in the two-dimensional plane perpendicular to the applied field direction. Spatially resolved raster scans of the sample were performed with apertures as small as 80 microns, corresponding to only 1.2*10^6 vortices for an applied field of 0.5 T. These revealed spatial variations in the metastable and equilibrium vortex lattice populations, but individual domains were not directly resolved. A statistical analysis of the data indicates an upper limit on the average domain size of approximately 50 microns.
Introduction
Vortices in type-II superconductors are of great interest, both from a fundamental perspective and as a limiting factor for applications where vortex motion leads to dissipation. Broadly speaking, vortex matter exhibits similarities with a wide range of other interesting physical systems including skyrmions [1,2], glasses [3,4], and soft matter systems such as liquid crystals, colloids, and granular materials [5]. Correspondingly, vortex matter presents a simple model system to examine important fundamental problems such as structure formation and transformation at the mesoscopic scale, metastable states (MSs), and non-equilibrium dynamics.
The presence of metastable non-equilibrium vortex lattice (VL) phases in superconducting MgB 2 is well established [6,7]. The equilibrium VL phase diagram for this material, shown in figure 1(a), displays three triangular configurations, denoted F, L and I, differing only in their orientation relative to the hexagonal crystalline axes [6,8]. In the F ( figure 1(b)) and I phases a single global orientational order is observed, with the VL nearest neighbor direction along the a* and a directions within the basal plane respectively. In the intermediate L phase (figure 1(c)), the VL rotates continuously from the a to the a* orientation, giving rise to two degenerate domain orientations. Cooling or heating across the F-L or L-I phase boundaries leaves the VL in a MS, as thermal excitations are insufficient to drive the system to equilibrium [6]. The metastability is not due to Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
pinning, but represents a novel kind of collective vortex behavior most likely due to the presence of VL domain boundaries [7].
Domain nucleation and growth governs the behavior of a wide range of physical systems, and it is natural to expect similarities between the VL and for example martensitic phase transitions [9], domain switching in ferroelectrics [10] or the skyrmion lattice where field/temperature history dependent metastability has also been reported in connection with structural transitions [11][12][13]. Recently, we have studied the MgB 2 VL kinetics as it is driven from the MS to the equilibrium state (ES) by an AC magnetic field [14,15]. This showed an activated behavior, where the AC field amplitude and cycle count correspond to an effective 'temperature' and 'time' respectively. Moreover, the activation barrier was found to increase as the fraction of vortices in the MS is suppressed, leading to a slowing down of the nucleation and growth of ES VL domains.
In this paper, we present small-angle neutron scattering (SANS) studies of the structural properties of the MgB 2 VL throughout the transition from the MS to the ES. These complement the kinetic measurements discussed above. Experimental details are given in section 2, describing the two types of measurements used to study domain formation parallel and perpendicular to the applied field. Results of rocking curve measurements, to determine the VL longitudinal correlation length, and raster scans, which focus on the domain formation in the plane perpendicular to the applied field direction, are presented in section 3. The implications of our data is discussed in section 4, and a conclusion is given in section 5.
Experimental details
We used the same 200 μg single crystal of MgB 2 (T 38 K c = , H 3.1 T 0 c2 m = ) as in prior SANS studies [6,7]. The sample was grown using a high pressure cubic anvil technique that has been shown to produce good quality single crystals [16], and isotopically enriched 11 B was used to decrease neutron absorption. The crystal has a flat plate morphology, with an area of ∼1mm 2 figure 2(a) and a thickness of ∼75 μm estimated using the density of MgB 2 (2.6 g cm −3 ).
SANS measurements [17] were performed on the CG2 General Purpose SANS beam line at the High Flux Isotope Reactor at Oak Ridge National Laboratory [18], and the D33 beam line at Institut Laue-Langevin. The final data presented in this study was collected at D33 [19][20][21] but consistent results were found at both facilities. The incoming neutrons, with wavelength λ=0.7nm and wavelength spread Δ λ/λ=10%, were parallel to the applied magnetic field. Measurements were performed at either ∼2.5 K or 14.2 K with 0.5 T applied parallel to the crystal c axis using a horizontal field cryo-magnet.
Different experimental configurations were employed for rocking curves and raster scans. For the rocking curve measurements the tightest beam collimation allowed by the D33 instrument was used, with a 10 mm diameter source aperture and an effective sample aperture of 1 mm (crystal size) separated by 12.8 m. Combined with the effects of the wavelength spread and a VL scattering vector q=0.105 nm −1 corresponding to an applied field of 0.5 T, this yields a total experimental resolution of 0.042°FWHM for the rocking curve width [22].
For the raster scans (D33 only), individual 'pixels' were imaged by SANS one at a time and compiled to create a two-dimensional image of the sample, as shown in figure 2. Here, gadolinium sample apertures with diameters of 190 and 80 μm, and a larger source aperture of 20 mm were used. The azimuthal resolution of 4.7°FWHM, estimated from the width of the undiffracted beam on the detector, was sufficient to resolve the closely spaced MS and ES VL Bragg reflections on the detector [14]. Starting with the top-left corner, the cryo-magnet was translated horizontally to image an entire row of pixels, and then moved vertically to begin the next row.
Step sizes of 200 and 100 μm for the translations were chosen to match the aperture sizes.
All VL configurations studied by SANS were prepared using the same protocol: first, an equilibrium VL was obtained in the F phase (T>13.2 K) or the L phase (∼2.5 K) by performing a damped oscillation of the DC magnetic field with an initial amplitude of 50 mT around the final value of 0.5 T [6]. In superconductors with low pinning, this results in a well-ordered, equilibrium VL configuration [23]. Following the damped field oscillation, the ES VL was either cooled to 2.4 or 2.7 K across the F-L phase boundary to obtain a MS F phase ('supercooled') or warmed to 14.2 K to obtain a MS L phase ('superheated'), as indicated by the arrows in figure 1(a). The higher temperature is close to the F-L phase boundary, but has been verified to lie clearly within the F phase [15]. The VL relaxation is not thermal, and therefore not expected to depend on the exact oscillation temperature or the cooling rate [14,15].
To gradually evolve the VL from the MS to the ES phase, vortex motion was induced using a bespoke coil to apply a controlled number of AC field cycles parallel or perpendicular to the DC field used to create the VL. A sinusoidal wave function was used, with a frequency of 250 Hz and a peak-to-peak amplitude of 0.5 mT ). The AC field amplitudes are small compared to the damped DC field oscillation used to prepare the initial ES VL, which allowed for a precise preparation of the VL states used for the structural studies. No AC cycles were applied while the VL was imaged with SANS.
Rocking curve measurements
Diffraction from the VL occurs at scattering angles given by Bragg's law: . As a result of both lattice imperfections and the finite experimental resolution, reflections are broadened in reciprocal space, and scattering will occur for a range of angles around θ=θ 0 . Figure 3 shows the intensity as the VL is rotated through the Bragg condition in a typical rocking curve for MgB 2 .
Spatial correlations in the VL decay exponentially with distance, with a correlation length ζ L , resulting in a Lorentzian line shape in reciprocal space. In cases where the width of the Lorentzian and the instrumental resolution are comparable, rocking curves are best described by a Voigt profile: This is a convolution of a Lorentzian function (L) representing the intrinsic width of the VL Bragg peaks and a Gaussian (G) representing the resolution. The exact forms used for the Lorentzian and Gaussian functions are: Here I 0 is the total integrated intensity of the rocking curve, and w L and w G are the full widths half maximums (FWHMs) of the Lorentzian and Gaussian. The latter was kept constant and equal to the experimental resolution w G =0.042°for all fits. The rocking curve in figure 3 is almost entirely described by the Lorentzian, showing that the resolution is sufficient to allow a determination of the VL correlation along the field direction. As seen in the following, the relative orientation of the AC and DC fields does not affect the results. For each rocking curve the evolution of the VL towards the ES is describe by a 'transition coordinate'. In the supercooled case, where the transition to the ES is discontinuous, this coordinate is the remnant metastable volume fraction, f MS [14]. This is obtained from the intensity ratio of the Bragg peaks belonging to the MS F phase and the total scattered intensity from the VL, discussed in more detail in section 3.2. In the superheated case the MS VL domains rotating continuously towards the ES orientation [15]. Here the transition coordinate is defined as the azimuthal peak splitting, Δf, of the VL Bragg peaks on the detector, shown in the inset to figure 4(b). For both the supercooled and superheated cases, the transition coordinate decreases as the global MS state is approached.
In figure 4, only the fitted Voigt profiles are shown for clarity. Here, each curve is individually offset horizontally to account for small differences in the fitted Bragg center (θ 0 ). For the supercooled rocking curves in . While there are some fluctuations in the data, the widths of all of the curves in figure 4 are essentially indistinguishable, irrespective of preparation (superheating/cooling), the value of transition coordinate, or the AC field orientation. The slight reduction in the scattered intensity observed for the 13 mT AC field amplitude may be due to a VL disordering in the plane perpendicular to the field direction. Figure 5 shows the fitted Lorentzian widths w L as a function of the transition coordinate, with the experimental resolution given by the solid line for reference. Here, the MS to ES transition progresses from right to left. For each measurement sequence, the widths are constant within the precision of the fits throughout the transition. The average (dashed line) for the supercooled and superheated cases also agree within standard deviation for each data set (shaded area). This shows that regardless of the transition pathway, the VL experiences no longitudinal disordering. As previously described, the rocking curve fits are dominated by the Lorentzian contribution. Fitting the data with a Lorentzian instead of a Voigt profile yielded widths that were at most 10% larger than those in figure 5.
Raster scan measurements
Spatially resolved measurements of the VL were performed to investigate variations in the MS and ES domain populations in the plane perpendicular to the applied field direction. Due to the time consuming nature of these measurements, only a single VL configuration was investigated. Prior to the raster scans, a supercooled VL was prepared in the usual manner. The VL was then driven to a state with approximately equal intensity in each of the three domain orientations, by applying 600 AC cycles with an amplitude ) at the measurement temperature of 2.7 K. Figure 6 shows the azimuthal intensity distribution for the bulk system. The line in figure 6 is a fit to a threepeak Gaussian: Here I 0 is a constant accounting for isotropic background scattering, I j is the integrated intensity, w j is the FWHM, and j j is the center for the jth Bragg peak. The individual peak intensities (I j ) are proportional to the number of scatterers in the corresponding domain orientation. From the fit the metastable and equilibrium volume fraction can be determined by To improve the spatial resolution a second raster scan using an 80 μm aperture was performed on the central part of the sample. The data, visualized in the same manner as for the larger aperture scan, is shown in figure 9. Overall, the 80 and 190 μm aperture raster scans appear qualitatively similar. Again, pixels frequently contained a mix of domain orientations and the two ESs tended to occur in the same pixel. However, with the 80 μm aperture it is possible to discern more fine structure. For example, there are several blue-green pixels in the top right corner indicating the presence of one of the ES orientations with the metastable orientation. Similarly, towards the bottom of the scan, the pixels transform gradually from green to brown to purple. The inability to resolve individual VL domains is unsurprising, given that the illuminated sample area with an 80 μm aperture size and an applied field of 0.5 T contains ∼1.2×10 6 vortices. However, improving the resolution is not straight forward. The present studies are already approaching the limit of the D33 SANS instrument, both in terms of intensity/required count time and the precision with which it is possible to reliably translate the cryo-magnet horizontally and vertically.
Compared with the bulk measurements of the VL in figure 6, the I j ( )distributions for the individual pixels in the raster scan exhibit a greater variation in the Bragg peak centers. This is evident from the histograms of the fitted centers for each of the three domain orientations, shown in figure 10. Most of the Bragg peaks could be assigned a particular domain orientation (i.e. ES 1 , MS, or ES 2 ) based on relative angular proximity to other Bragg peaks in the pixel. For peaks that could not be inferred in this manner, the state was determined by which domain orientation angle the fitted center was closest to. For each of the three peaks, the average peak position relative to the bulk f 0 =245.8°and the associated standard deviations (190 μm: corresponds to an unstable equilibrium in the single domain free energy for the supercooled VL [15]. An equally large standard deviation is found for the ES L phase domain orientations, even though these correspond to a minimum in the free energy and, from a single domain perspective, are expected to be more consistently aligned. The L phase 'misalignment' must therefore be due to domain interactions which favor acertain relative orientation, given, for example, by he coincident site lattice theory [24].
Discussion
Based on the results described above, it is possible to infer several properties of the VL phases in MgB 2 and the transition from the MS to the ES.
The VL correlation along the field direction, ζ L , is inversely related to the Lorentzian rocking widths in figure 5. From the VL scattering vector q=0.105 nm −1 and the mean Loretzian width (converted to radians) for the supercooled (w L =0.119°±0.01°FWHM) and superheated (w L =0.111°±0.012°FWHM) cases, we find qw 2 10 m. 6 The correlation length is of the same order of magnitude as the crystal thickness ∼75 μm, highlighting the high degree of ordering observed for the MgB 2 VL. Importantly, we observe no broadening of the rocking curves and no difference between the supercooled and superheated case in figure 5, despite the different nature of the transition (discontinuous versus continuous). This implies that little or no fracturing of the VL occurs along vortex direction as the system is driven from the MS to the ES, and thus indicates that the nucleation and growth of ES state domains primarily takes place in the two-dimensional plane perpendicular to the applied field. In principle it is also possible to infer an in-plane correlation length from the width of the VL Bragg peaks in the plane of the detector. However, due to the two orders of magnitude poorer azimuthal resolution (see section 2) this yields ζ A =2/q w A ∼0.1 μm [6]. which should only be taken as a lower limit on the domain size. Using a transverse field scattering geometry, which could take advantage of the higher longitudinal resolution of the SANS instrument to probe the in-plane domain formation, is not practical due to the plate-like morphology of the MgB 2 single crystals.
An approximate upper limit on the in-plane domain size of 80 μm is obtained from the raster scan in figure 9, as almost every pixel contained scattering from more than one domain orientation. An estimate of the average ES domain size can also be obtained from a statistical analysis of the intensity ratio associated with the two equilibrium VL domain populations domains within a pixel. From the 26 pixels in the 80 μm aperture raster scan in figure 9 which were not purely in the MS phase we obtain a value of 0.105 [14]. This yields 160 domains for the entire 1mm 2 sample, corresponding to an average domain size ∼80 μm. The good agreement between these order of magnitude estimates leads us to conclude that VL domains in the plane perpendicular to the applied field is of the order several tens of microns. Finally, it was previously found that the scattered intensity stays relatively constant while the VL driven between metastable and ESs [14]. From this we infer that the VL domain boundaries occupy a relatively small part of the total sample volume, consistent with the analysis above.
There are similarities between the supercooled VL discussed above and structural martensitic phase transitions. Examples of the latter include the tetragonal-to-orthorhombic transition in cuprate superconductors [25,26] or the α-to-ò transition in iron [9,27]. In both cases, the system has two equal energy pathways from the initial to the final structure, and a final configuration consisting of a periodic twin-boundary lattice rather a single global domain which has the lowest energy. Here, the interface between the initial and twinned phases provides the force necessary to stabilize the metastable twin-boundary lattice, and the orientation of the interface depends sensitively on the relative populations of the two twinned phases [28]. In the case of the MgB 2 VL, the presence of a twin-boundary lattice could explain the spatial correlations between the two equilibrium domain orientations observed in the raster scans in figure 7 to figure 9. With the 190 μm aperture, of the 23 pixels that were not solely in the MS phase 16 (70%) contained both ES VL domain orientations. Similarly, for the 80 μm aperture scan, 16 of 26 pixels (62%) show scattering intensity associated with both ESs. No pixels purely in a ES 1 or ES 2 was observed, highlighting the preference for the ES domains to be in close proximity to each other or to a MS domain. Further studies that could provide real space information about the VL domain boundaries, either experimentally (e.g. by STM) or by non-equilibrium molecular dynamics simulations [29,30], would be a valuable complement to our SANS results and interpretation.
Conclusion
We have examined the structural properties of the VL in MgB 2 as it is driven between metastable and equilibrium configurations, using an AC magnetic field to induce vortex motion. Rocking curves show a lack of broadening, demonstrating that the VL does not fracture along the applied field direction in neither the supercooled nor the superheated case. Furthermore, the VL longitudinal correlation length is comparable to the sample thickness, and the VL can be considered a system of straight rods. Raster scans were performed to explore the formation and growth of ES domains in the two-dimensional plane perpendicular to the applied field. While it was not possible to resolve individual VL domains, a statistical analysis provided an estimate of the average domain size of approximately 80 μm. Finally, strong spatial correlations between the two equilibrium domain orientations is reminiscent of the twin-boundary lattice observed in connection with martensitic phase transitions. | 4,933.6 | 2019-02-22T00:00:00.000 | [
"Physics"
] |
Information accessibility, accounting manipulation, and sustainable development of digital enterprises: Based on double moderating effect model and panel PSM-DID method
A theoretical mechanism was analyzed from the micro perspective of the enterprise to explore how information accessibility moderates the effect of accounting manipulation on the sustainable development of digital enterprises. Using data from 1200 listing digital enterprises in China and the DEA-Malmquist index method, the efficiency value of digital enterprises in 2007–2021 was estimated to represent the index of sustainable development of digital enterprises. The accounting manipulation was detected using the panel PSM-DID method based on the Administrative Measures for the Recognition of High-tech Enterprise’s policy. The information accessibility value was estimated based on the MDA method. Empirical studies were conducted using text analysis, the panel PSM-DID method, and the double moderating effect model. The results showed that: (1) Accounting manipulation had a negative impact on the sustainable development of "true" digital enterprises and the "fake" digital enterprises. (2) Information accessibility directly and positively enhanced the technological progress and scale efficiency of digital enterprises, and its moderating effect was heterogeneous, with a significant moderating effect on the "true" digital enterprises and a negative effect on the "fake" ones.
Introduction
Facing a new downward pressure on the economy [1], it is imperative to accelerate the economic transformation and upgrading and foster new impetus for economic development by empowering of the digital economy (DE) [2]. Therefore, policies such as the 14th Five-Year Plan for Digital Economy Development of China intend to invest more into the weak fields of the DE to break through the bottlenecks of DE development and promote the integration of the DE with the real economy. Implementing preferential policies (PP) to support the growth of a young industry, including the digital industry, is both a deliberate strategy and a common practice [3,4]. However, policy preference (PP) induces accounting manipulation (AM), which undoubtedly affects the allocation of policy resources, and impedes the sustainable development of DE [5]. The rapid development of DE has significantly altered public administration and business operations. As a type of general application technology, taking the Internet as media, digital technology has been widely applied in public management [6], enabling remote and instant sharing of information, improving the accessibility of information for the authorities and the public, and subverting the traditional public management mode. Therefore, constructing and developing E-government facilitate the implementation of preferencial policies [7], limit the scope of AM, and weaken the impact of AM on the sustainable development of digital enterprises. In addition, using Internet-based information technology in enterprises improves the asymmetry of supply and demand information, the matching level between supply and demand, and production factor efficiency, thereby optimizing factor allocation efficiency. In other words, information accessibility (IA), which is improved because of DE development, is very likely to mitigate the negative impact of AM and optimize the sustainable development of DE.
However, several variables, such as the innovation and application stage of informatization, and the integration of the DE with real economy, still disrupt the overall impact of IA on AM. Currently, the digital technology is transitioning from the "installation" phase to the "development" phase [8]. An empirical question is how IA impacts the AM's effect on the sustainable development of digital enterprises. The existing studies on the AM were abundant, from the perspectives of input factors allocation [9], ownership heterogeneity [10], corporate social responsibility's effect on earnings management [11,12], and the AM's impact on the effect of PP [13], but the findings were controversial. In most cases, it was agreed that AM was detrimental to the sustainable development [14,15] due to the authorities and the public's limited identification ability and the information asymmetry. With the increasing integration of the DE and real economy, digital technology has empowered enterprise management and public management [16,17]. It has an impact on AM and its impact on the sustainable development of enterprises.
Based on the case of digital enterprises and the Administrative Measures for the Recognition of High-tech Enterprises Policy (hereinafter referred to as the "Administrative Measures" or "the PP"), by using the exogenous event that digital enterprises were recognized as hightech enterprises, this paper demonstrated: 1. The heterogeneous performance of "true" and "fake" digital enterprises' efficiency, who were without or with AM, after being recognized by high-tech enterprises, to determine the impact of AM on the sustainable development of digital enterprises.
2. The direct impact of IA on the sustainable development of digital enterprises and its moderating effect of IA on the AM's impact on the sustainable development of digital enterprises.
AM affects the PP's effect on the sustainable development of digital enterprises
The AM influences the distribution of the original policy dividends between the "true" and "fake" high-tech enterprises and impacts the effect on the sustainable development of "true" and "fake" digital enterprises heterogeneously.
AM's impact on the "fake" digital enterprises' efficiency. The effect mechanism is mainly through the following mechanisms: First, the AM transfers a part of the policy dividends from "true" to "fake" high-tech enterprises. The ambitious "fake" digital enterprises increase investment for higher profits, which increases R&D investment [18], the demand for R&D labor, and the production and sales volume. However, the innovation facility and capacity of "fake" high-tech digital enterprises are weaker than the "true," resulting in a weaker technological progress effect.
Second, AM modulates the signal effect of PP. Due to AM, "fake" enterprises obtain policy R&D funds, non-R&D-specificity credit funds and risk ventures as part of a growing industry [19]. Typically, "fake" enterprises are relatively smaller, and higher possibility and financing translate to an expansion of R&D and production scale, resulting in increased efficiency. However, as income is a key prerequisite for policy dividend granting, a rational choice is to expand revenue scale, rather than R&D activity [20,21]. The latter is of higher risk and uncertainty [22]. Therefore, the possibility of scale efficiency optimization of the "fake" digital enterprises is higher than that of technological efficiency and technological progress.
Third, the cost of AM diverts R&D investment, which hinders the optimization of technological progress and technological efficiency. The degree of the siphoning effect depends on the management decisions of the "fake" digital enterprises [23]. If R&D funds are misapplied to offset the cost of AM, it will hinder the technological progress of enterprises.
In conclusion, the impact of AM on the sustainable development of "fake" digital enterprises is extensive [24] and the net effect, which is determined by the balance of positive and negative effects, can be positive, negative or non-liner [25]. Therefore, the AM is very likely to affect enterprise efficiency, thereby influencing the sustainable development of digital enterprises. Therefore, the following hypothesis is proposed: Hypothesis 1 (H1): Under the premise of other conditions remaining unchanged, the AM has a positively impacts the sustainable development of the "fake" digital enterprises.
AM's impact on the "true" digital enterprises' efficiency. There are two sides to AM's impact on the sustainable development of "true" digital enterprises. One is that, with a larger number of policy beneficiaries, the policy dividend per capita of "true" digital enterprises is lower than before AM. The reduction of policy dividend weakens its original incentive effect on enterprises' self-own R&D funds and the signal effect on R&D talents and financing, which has a negative impact on scale efficiency and technological progress. Along with the intense competition from "fake" enterprises [26], scale efficiency and technological progress of "true" digital enterprises have become negative. In addition, policy dividends' multiplier effect and signal effect are conducive to maximizing efficiency. Therefore, the net efficiency effect of the AM on "true" digital enterprises' sustainable development is decided by the counteractions of the positive and negative effects. Therefore, the following hypothesis is proposed: Hypothesis 2 (H2): Under the premise that other conditions remain unchanged, the AM has a negative effect on the sustainable development of the "true" digital enterprises.
The IA's role in the AM's impact on the sustainable development of digital enterprises
The impact of the IA in the AM's effect on the sustainable development of PP is shown as follows: First, the IA is conducive to the sustainable development of digital enterprises directly. As a new production factor, digital technology modifies the coefficient of technology in the production function model [27], and the capital-labor substitution rate is likely being increased [28]. The substitution function of information technologies affects the IA's impact on the sustainable development of enterprises significantly. In addition, the Internet and information technologies optimize the allocation of factors due to the improved supply-anddemand information symmetry, eliminating economic bubbles [18]. However, the impact of IA varies with the innovation and application of information technology. Generally, the higher degree of informatization of enterprises, the stronger elasticity of labor substitution by digital technology [29], and the higher possibility of a positive effect of IA on enterprise efficiency.
Second, the IA facilitates the implementation of PP and mitigates the negative impact of AM on the sustainable development of enterprises. The IA improves the screening ability of the authorities and the accuracy of the selection of beneficiaries, upgrading the traditional monitoring methods [30]. Information, digital and intelligent monitoring technologies strengthen the supervision over the AM and implementation of policy dividends. Therefore, the IA ensures the R&D specialization of policy dividends, optimizes dividends' allocation efficiency, and positively affects the efficiency of enterprises.
Third, the IA has a negative effect on the effect of PP and the AM's impact on the sustainable development of enterprises. The innovation and application of digital technology require professional labor skills, adequate digital infrastructures, and smooth networks. The degree of similarity between the current state and the optimal state also affects the performance of IA [31]. Therefore, the following hypothesis is proposed:
Methodology, index selection, and data source
Model setting AM's influence on the efficiency effect of PP. Instrumental variable methods, difference methods, sensitive analysis, and RDD are the common methods of policy effect testing. The difference methods, which are the most similar to random experiments and considered quasinatural experiment designs, can be used to overcome the endogenous problem in parameter estimation. This study aimed to investigate the AM's impact on the sustainable development of enterprises and IA's moderating effect. The the recognition of high-tech enterprises was regarded as a quasi-natural experiment. PSM-DID, as proposed by Heckman et al. [32], was used to detect the "true" and "fake" digital enterprises. As reported by relevant studies [5,33,34], PSM was first used to match the control group and treatment group, and the control group that was most similar to the treatment group was selected to increase the comparability of the samples. Secondly, DID was used to analyze AM's impact on enterprises' sustainable development. The following model was set: Where, EFF was digital enterprise efficiency, including total factor productivity (TFP), scale efficiency (SE), technical efficiency (TE), and technological progress (TECH), representing the sustainability of digital enterprises, i was a digital enterprise, t was the year, βt was the fix-effect virtual variable. Virtual variable POST was whether the enterprise was identified as a high-tech enterprise. Virtual variable HiT was whether the enterprise had passed the high-tech enterprise identification. X was the control variable. a0 was the constant, and a1 and a2 were the coefficients of the corresponding variables. a2 was the impact of Administrative Measures on enterprise efficiency. When a2 was significantly positive, it meant the PP positively affected efficiency. When it was negative, it meant the PP had a negative effect on efficiency. Based on the model (1), set a model as follows: Where PsdHiT was AM. a4 was the impact of AM on the sustainable development of digital enterprises on "fake" digital enterprises. The difference of values of a2 in models (1) and (2) was the impact of AM on the sustainable development of "true" digital enterprises. When the value of a2 in model (2) was less than that in model (1), the AM had a negative effect. When the value of a2 in model (1) was higher, the AM had a positive effect.
IA's effect. To verify the direct effect of IA on the sustainable development of digital enterprises, a direct relationship model was set with enterprise efficiency as the dependent variable and enterprise IA (Inf) as the independent variable as follows: To verify the moderating effect of IA in the AM's effect on the sustainable development of digital enterprises, based on model (3), model (4) was set as follows: Where Inf*HiT*POST was a cross-term of passing high-tech identification (HiT*POST) and IA (Inf). From the coefficient a1 of Inf in model (3) To verify the moderating effect of IA in the impact of AM's effect, based on the idea of the Double Moderating Effect Model, model (5) was set by adding the AM variable (PsdHit*POST) to model (4), and based on model (5), set model (6) by adding the cross terms of AM variable (PsdHit*POST) with IA (Inf).
Where from a2 in model (5) and a4 in model (6), we had the moderating effect of the IA's impact on the sustainable development of "true" digital high-tech information. If the value of a4 in model (6) was positive, the IA strengthened the effect in model (5); if the value of a2 in model (6) was negative, the IA weakened the effect in model (5). From a3 in model (5) and a5 in model (6), we had the moderating effect of IA's impact on the sustainable development of "fake" digital enterprises. When a5 in model (6) was positive, the IA strengthened the effect of model (5). When a5 in model (6) was negative, the IA weakened the effect of model (5).
Variable definition and data source
Variable definition. The sustainable development of digital enterprise efficiency (EFF). There are mainly two measurement approaches for the development of DE. One evaluates the DE index by building a comprehensive evaluation index system, including informatization indicators and networking indicators, by adopting the entropy or index method. The other is productivity evaluation by building an evaluation index system, including input and output indicators, employing the DEA and Malmquist index methods [35,36]. The latter was adopted because the results by empoying it can show the degree of sustainable development. The DEA-Malmquist method was used to measure the relative efficiency of the enterprise [37,38], using labor and capital investement as inputs and profit and invisibale assets as outputs. The total payroll measured labor input. The total business cost of the enterprise measured by capital investment. The net profit represented the profit output. The net intangible assets represented the intangible assets.
AM (PsdHiT). According to Yang GH, et al. (2017) [5] and the prerequisites for policy dividend granting in the Administrative Measures when the sales revenue was less than 50 million yuan, and the proportion of the enterprise's R&D investment to the sales revenue was [6%, 7%), the value of PsdHiT was 1. When the sales revenue was more than 50 million yuan and less than 200 million yuan, and the proportion is [4%, 5%), the value of PsdHiT was 1. When the total revenue was higher than or equal to 200 million yuan, and the proportion was [3%, 4%), the value of PsdHiT was 1, and the rest was 0.
Whether the enterprise was recognized as a high-tech enterprise (HiT) and the virtual variable of the year after an enterprise was recognized as a high-tech enterprise (POST). According to the idea of the PSM-DID method, the value of the variables was decided as follows: When the enterprise was identified as a high-tech enterprise during the inspection, the HiT value was 1, or it was 0. After the enterprise was recognized as a high-tech enterprise, the value of POST was 1, or it was 0.
IA (Inf). IA refers to the information interaction ability or connectivity between individuals or regions [39]. With Internet technology, information is transmitted using digital coding [40], so the IA level is proportional to the level of its digitization level. Therefore, according to [41][42][43], the frequency of digital keywords was evaluated by adopting the MDA analysis of annual reports. A total of 184 terms like "AI technology", "block chain technology", "cloud computing tech", "big data technology", and "digital technology application" etc. were used as keywords. Then, the logarithm was taken as the value of IA. The frequency of keywords divided by the total number of words in the annual report was used as a surrogate variable for IA to ensure the robustness of the results.
Control variables. It included the equity ratio (ownershipp), asset-liability ratio (balance), equity concentration (con), whether the chairman and general manager are the same person (chairman), and financial leverage (lever). The equity ratio was measured by the total liabilities divided by shareholders' equity. The asset-liability ratio measures the ratio of total liabilities to total assets. Whether the chairman and general manager are the same person was 1 if the answer was yes or it was 0. The ratio of change rate of earnings per share of common stock to change rate of EBIT measured the financial leverage.
Data sources. Descriptive statistics for variables and data are shown in Table 1. The samples were selected from the enterprise list of the DE sector of the Shenzhen and Shanghai Stock Exchanges, excluding those listed or delisted in or after 2007, ST enterpreses, and ST* enterprises. The data of 1200 digital enterprises between 2007 to 2021 was from the Guotai'an database (www.gtarsc.com/) and the annual reports of listed companies in the Shenzhen and Shanghai Stock Exchanges. Based on normalized data, evaluation was done by using Stata 15.0.
Results and analysis
The AM's impact on the sustainable development of digital enterprise Two-way fixed-effect Muti-stage DID regression and PSM-Muti-stage DID model results are shown in Table 2. The results of both regressions were similar, indicating that the PSM-DID method was feasible. Table 3 shows the two-way fixed-effect PSM-DID regression results of model (2). The results showed that: (1) The coefficient between lnTFP and HiT was of the same direction as that of model (1), indicating that passing the recognition of high-tech enterprises is conducive to the optimization of efficiency, and the positive effect was significant because of the optimization of technological progress and scale efficiency. This confirmed hypothesis 1.
Muti-stage DID model regression
(2) The coefficient between lnTFP and PsdHiT was -0.0050, indicating that AM hindered the sustainable development of "true" high-tech enterprises, which confirmed hypothesis 2 of this paper. The negative effect of AM was mainly because of the negative effect on the SE and TECH of "true" digital enterprise. Table 4 shows the results of a direct efficiency effect of IA on enterprise (model 3). The coefficient between lnTFP and Inf was 1.47E-04, and it was significant. It confirmed hypothesis 3. The positive effect of IA on the efficiency of digital enterprises was from the effect on scale efficiency (the coefficient was 3.19E-05) and technical efficiency (the coefficient was 1.21E-04). In contrast, the negative effect on TE was not significant. The moderating effect of IA Table 5 shows the results of model (4). The coefficient between HiT*Inf and lnTFP was negative. The coeefficent between HiT*Inf and lnTE was significantly negative, but the coefficients between HiT*Inf and lnTECH and between HiT*Inf and lnSE were positive, indicating the IA deteriorated the positive effect of the high-tech enterprises recognition on the TFP of digital enterprises, i.e., IA had a negative moderating effect. The negative moderating effect was primarily from the negative impact on the technological efficiency of digital enterprises. In contrast, the positive intermediate effect on the scale efficiency and technology progress was not significant or sufficient. Table 6 shows the results of models (5) and (6), indicating the moderating effect of IA on the sustainable development of "fake" digital enterprises. Comparing the results of (1) with (5), Note: Standard error is included in parentheses. *, * * and * * * are significant at the levels of 10%, 5% and 1% respectively.
The direct efficiency effect of IA on enterprise
https://doi.org/10.1371/journal.pone.0283843.t005 Table 6. The moderating effect of IA on the sustainable development of "True" and "Fake" digital enterprises.
(1) (2) with (6), (3) with (7), and (4) with (8), the coefficient between PsdHiT*Inf and lnTFP was significantly negative, indicating that, as for the "fake" digital enterprises, the IA had a negative moderating effect on the positive effect on TFP, indicating that it weakened the positive effect on the efficiency of "fake" digital enterprises. It confirmed hypothesis 4, indicating that IA moderated the AM's impact on the sustainable development of digital enterprises. The negative moderating effect in the TE of digital enterprises was significant, while the others were not. The coefficients between PsdHiT*Inf and lnTECH and between PsdHiT*Inf and lnSE were significantly negative, implying that the negative intermediate effect of IA on the efficiency effect was due to the negative intermediate effects on the technology progress and scale efficiency. Test of the effect of PSM matching. To test the matching effect of the PSM method, we compared the density function diagrams before and after the PSM method matching, and found that the density function of the matched enterprises in the control group was comparable to that in the experimental group. We also compared the differences in enterprise characteristics between "true" and "fake" digital high-tech enterprises before and after the PSM method matching. It was found that there was no significant difference in enterprise characteristics between the two groups of samples after matching. In light of this, we concluded that the experimental and control groups matched by the PSM methods were highly comparable.
DID parallel trend hypothesis test
The validity of the evaluation results of the DID model depends on whether the "parallel trend hypothesis" can be satisfied. The results of the parallel trend hypothesis test of the experimental and control groups' samples showed that the only difference was that the experimental group passed the high-level identification while the control group did not. In addition, we used the dynamic regression model to test the impact of high-tech enterprise identification on enterprise efficiency. From the parallel trend diagram of DID model, it showed that the difference between TFP and its efficiency decomposition indexes SE, TE, and TECH of digital enterprises was narrowed after they were recognized as high-tech enterprises, showing that the DID model met the parallel trend assumption.
Placebo test
To rule out the possibility that the efficiency changes of enterprises after obtaining high-tech enterprise recognition were due to time trends, we conducted a placebo test by randomly selecting 100 digital enterprises and their recognition years. The PSM method matched these samples, thus obtaining the PSM sample results. The results showed that the empirical results were consistent with those of the PSM samples mentioned above, confirming that the time trend did not cause the PSM-DID regression results.
Discussion
The IA enables better accountability in public bureaucracies through E-governance initiatives, and understanding the IA has become essential to describing China's economic and social development in the new era [44]. For a long time, the AM issue is the key to addressing sustainable development.
Firstly, it is clear from the analysis of this study that it is passing the recognition of hightech enterprises while the AM hindered the sustainable development of digital enterprises. As for the "true" high-tech digital enterprises, the AM improved the scale efficiency, which could be because the "fake" and the "true" digital enterprises were collaborative. The growing "fake" digital enterprises supplemented, strengthened and extended the industrial chain and upgraded the industrial structure [26], forming a feedback effect on the sustainable development of "true" digital enterprises.
However, for the "fake" high-tech enterprise, being a high-tech enterprise is not conducive to optimizing their efficiency, particularly their scale efficiency or technological progress. Although the AM had a negative effect on the allocation of policy resources, the policy dividends were conducive to the expanding market scale and improving the market environment, which was also conducive to the optimizing "true" digital enterprises. However, the AM cost might have a siphon effect on the operating and R&D funds, causing a negative effect on "fake" enterprises.
Secondly, the IA was conducive to the sustainable development of digital enterprises. As enterprises have reached a certain level of informization, digitization and networking, the application of information technology in digital enterprises positively impacts the management of R&D and production planning. In addition, it was conducive to breaking the information bottleneck to the demand side and facilitate scale expansion, thereby optimizing scale efficiency.
Thirdly, the IA's moderating effect on high-tech enterprises was insufficient. Although IA should, theoretically, have a positive effect, which was conducive to alleviating the negative effect of AM on the sustainable development of digital enterprises, the positive impact of IA on output efficiency varied with the development of DE [38]. The DE was still developing then, and the integration of inform, intelligence, and digital technology with the real economy was low. Therefore, the effect of IA was not yet extinguished.
Finally, the IA's moderating effect on "fake" high-tech enterprises was significantly negative, indicating that IA strengthened the negative effect in "fake" enterprises, especially on the effect of the technological progress and the scale efficiency.
Conclusions
A theoretical mechanism was analyzed from the micro perspective of the enterprise to determine the impact of IA and AM on the sustainable development of digital enterprises. China's 1200 listed digital enterprises efficiency between 2007 and 2021 was estimated based on the DEA-Malmquist index model. The IA value was determined based on the big data crawler method. Based on the Administrative Measures, empirical analyses were conducted using models such as the panel PSM-DID method and the double moderating effect model. The following conclusions were obtained: First, the AM affected the sustainable development of digital enterprises heterogeneously. As for the "true" high-tech digital enterprises, the AM improved the scale efficiency, which could be of the feed back effect of AM from the "fake" ones to the the whole industry. The AM hindered the sustainable development of "fake" digital enterprises mainly because of the negative effect on their technology progress and scale efficiency. Second, the IA had a positive direct effect on the sustainable development of digital enterprises due to its effect on their technological efficiency and scale efficiency.
Third, the IA's moderating effect on the AM's effect on the sustainable development of digital enterprises was heterogeneous. IA generally did not improve the efficiency effect on "true" enterprises but significantly strengthened the negative effect of AM on the efficiency of "fake" digital enterprises, especially on the technology progress and scale efficiency.
We draw the following enlightenment: First, in addition to encouraging more R&D investments, we should strengthen and accelerate the construction of a "smart government", thereby strengthening the identification and supervision ability of the authorities for the positive effect of IA. Second, accelerate the cultivation and introduction of talents, optimize the human resource structure, release a positive effect on the TECH of digital enterprises, and further promote the TECH of enterprises. Third, cultivate the digital product and service market, break the supply-and-demand information bottleneck with the aid of digital technology, and direct enterprises to increase their market scale, thereby enhancing the SE and TFP of digital enterprises.
The main contributions of this research could be: first, this research focused on the opportunities brought by the digital economy because it improves information asymmetry and limits accounting manipulation behavior. Second, this research studied the AM's impact on "true" and "fake" digital enterprises' and the IA's direct and moderating effects rather than taking all the digital enterprises as a whole object, deepening the study and proposing targeted suggestions.
This study has limitations: first, due to the lack of official statistics on the digital economy, this study can only be based on the data of digital enterprises. Second, it fails to account for the more direct evidence of the feedback effect of the "fake" enterprises on the "true" ones. If it can be investigated, it will aid in overcoming the negative impact of accounting manipulation on the industrial efficiency effect of preferential policies and promote the sustainable growth of the digital industry. | 6,518.4 | 2023-03-31T00:00:00.000 | [
"Business",
"Economics",
"Computer Science"
] |
Research on Characteristics of Broadband Acoustic Sensor Based on Silicon-Based Grooved Microring Resonator
In order to meet the requirements of having a small structure, a wide frequency band, and high sensitivity for acoustic signal measurement, an acoustic sensor based on a silicon-based grooved microring resonator is proposed. In this paper, the effective refractive index method and the finite element method are used to analyze the optical characteristics of a grooved microring resonator, and the size of the sensor is optimized. The theoretical analysis results show that, when the bending radius reaches 10 μm, the theoretical quality factor is about 106, the sensitivity is 3.14 mV/Pa, and the 3 dB bandwidth is 430 MHz, which is three orders of magnitude larger based on the sensitivity of the silicon-based cascaded resonator acoustic sensor. The sensor exhibits high sensitivity and can be used in hydrophones. The small size of the sensor also shows its potential application in the field of array integration.
Introduction
Acoustic measurement is an important research means in non-destructive testing, underwater anti-submarine monitoring, health observation and biomedical imaging [1][2][3][4]. At present, traditional optical acoustic sensors based on fiber Fabry-Perot interferometer (FPI) [5,6], polymer fiber [7], or fiber Bragg grating (FBG) [8] have the advantages of high sensitivity and anti-electromagnetic interference compared with piezoelectric hydrophones, playing an important role in acoustic monitoring. However, most of these optical hydrophones rely on the amount of mechanical deformation to realize acoustic signal sensing and cannot avoid the limitations of self-resonance and narrow frequency.
In contrast, the acoustic signal testing technology based on optical microcavity, by virtue of excellent photoacoustic coupling characteristics, can avoid the limitation of selfresonance and narrow frequency and can expand the frequency range by 2-3 orders of magnitude and the sound wave detection frequency to MHz so that the scope of application can be greatly expanded and so that the device can be integrated into a chip, reaching the sub-centimeter-squared level, while ensuring its detection sensitivity. Sensors based on optical microcavities are often used in biochemical sensing [9], gas detection [10], refractive index sensing [11], and acoustic wave detection [12]. For acoustic wave detection, in order to achieve broadband and high-sensitivity acoustic signal detection, acoustic sensors combining highly sensitive and integrated microring resonators and low Young's modulus polymers have been proposed and improved. Reference [13] described an acoustic sensor based on a polymer microresonator with a quality factor of up to 10 5 and a frequency band response of up to 40 MHz. Reference [14] introduced an acoustic sensor based on a microring resonator, which realized 12 MHz sound wave detection, and its sensitivity reached 35 mV/KPa. Reference [15] analyzed a silicon-based acoustic wave sensor based on acoustic membrane resonance and realized acoustic wave sensing at 1-150 MHz. These Micromachines 2021, 12 studies show that the acoustic signal testing technology based on optical microcavity has great application potential in broadband acoustic monitoring. Therefore, this paper proposes a broadband acoustic sensor using silicon-based grooved microring resonators. Through theoretical analysis and simulation analysis, the photoacoustic coupling mechanism and acoustic sensing characteristics of the resonator are discussed, and the results show that it has good performance in acoustic sensing.
Principle of Microring Resonator
A diagram of the acoustic sensor structure based on the grooved microring resonator proposed in this paper is shown in Figure 1a. The grooved waveguide is composed of a straight strip waveguide and two ring silicon waveguides. The groove between the silicon waveguides and its upper cladding layer are filled with a polymer, and the silicon dioxide buffer layer is below the silicon waveguide. Figure 1c is a light field distribution diagram that satisfies the resonance state. The straight strip waveguide serves as the input and output ports of the resonator at the same time. When light is input from the input port of the straight waveguide, it is coupled with the curved waveguide of the groove microring resonator via the evanescent wave, and the lightwave that meets the resonance conditions causes resonance in the ring and propagates back and forth in the ring. The light that does not resonate couples with the straight waveguide through the gap. At this time, the normalized transfer function of the microring resonator, T, is expressed as follows [16]: wherein τ represents the transmission coefficient of the coupling zone, α represents the transmission loss factor of the microring, θ = 2πn e f f λ L represents the phase accumulated by the light propagating in the microring for a circle, λ represents the resonance wavelength, L represents the circumference of the microring, and n e f f represents the effective refractive index of the waveguide.
Design and Optimization of Structure
The curves of waveguides with different widths and the effective refractive index shown in Figure 2a. Figure 2a shows that, as the width of the waveguide increases effective refractive index gradually increases. The influence of the width of the groo When the optical path of the light beam traveling around the boundary of the geometric structure meets an integer multiple of the wavelength, interference enhancement occurs. The ring resonance equation can be expressed as follows: 2πRn e f f = mλ (2) wherein R represents the radius of the microring and m represents the number of resonance stages. The extinction ratio of the output spectrum of the microring resonator, ER, is expressed as follows: wherein it can be obtained that the extinction ratio of the microring resonator reaches positive infinity when |τ| = α. This condition is critical coupling. At this time, a perfect extinction ratio can be achieved, which is beneficial to the measurement of the spectral drift of the sensor. The quality factor of the microring is an important parameter to measure the microring resonator, expressed as follows: wherein n g = n e f f − λ dne f f dλ represents the group refractive index. From the expression, the quality factor is proportional to the radius of the ring.
Photoacoustic Coupling Effect
When the microring resonator is used in an acoustic sensor, the applied sound wave generates a stress field, and the stress can be calculated by the following equation: wherein v represents the Poisson's ratio of the material, E represents the Young's modulus of the material, and P represents the sound pressure generated by the acoustic signal. Under the influence of sound waves, the sound pressure of the sound waves changes the refractive index of the material of the microring resonator, thereby affecting the effective refractive index of the system. The relationship between the refractive index of the material in different directions and the magnitude of the stress can represent the elasto-optical effect [17]: wherein n x,y,z represents the refractive index of the material in different directions, n 0 is the refractive index of the material when there is no sound pressure, p 11 and p 12 are the elastic-optical coefficient of the material, and σ x,y,z represents the magnitude of the sound pressure in different directions. According to the optical waveguide theory, the equivalent refractive index method is adopted to obtain the effective refractive index of the grooved waveguide affected by the elasto-optical effect. The relationship between the refractive index change in the microring and the strain of the resonator is as follows [18]: When the external sound pressure changes, it causes the medium to interact with the evanescent wave. The effective refractive index of the waveguide changes due to the stress field imposed by the sound pressure change, which changes the resonance mode of the microring resonator and causes drifting of the resonance wavelength of the resonator. The Equation (2) demonstrates that, when the effective refractive index of the mode becomes larger, the resonance wavelength is red-shifted. By measuring the drift of the resonance wavelength, the voltage change can be demodulated to calculate the external sound pressure value.
For acoustic wave sensing, the higher the Q value of the resonator, the higher the sensitivity. The structure, width, and groove gap of the microring resonator all have an impact on the Q value. For acoustic sensors, in order to meet the needs of array integration, the smaller the size of the sensor, the better, so it is necessary to optimize the design of the geometric size of the microring resonator.
Design and Optimization of Structure
The curves of waveguides with different widths and the effective refractive index are shown in Figure 2a. Figure 2a shows that, as the width of the waveguide increases, the effective refractive index gradually increases. The influence of the width of the grooved waveguide and the gap between the grooves on the effective refractive index of the grooved waveguide is shown in Figure 2b. Figure 2b shows that, when the width of the grooved waveguide is constant, the width of the groove becomes larger and the effective refractive index becomes smaller; when the width of the groove is constant, the larger the waveguide width and the greater the effective refractive index. In order to obtain a smaller bending radius, the effective refractive index should be as large as possible, so that the bending loss will be relatively small [19]. Therefore, the waveguide should be as wide as possible, and the groove gap should be as narrow as possible. However, the groove gap should not be too narrow, and PMMA should be guaranteed to be spin-coated. In a word, the waveguide width should be 450 nm and the gap should be 100 nm. The waveguide width is 450 nm, and the optical mode distribution corresponding to different gaps is shown in Figure 2c.
In addition, the bending radius of the microring resonator has great influence on the effective refractive index and transmission loss of the waveguide. According to the relationship between bending loss and ring radius in Reference [20], the larger the bending radius, the greater the effective refractive index of the trench waveguide and the smaller the bending loss. Considering the bending loss, in order to obtain a larger Q value, the bending radius should be as large as possible. Therefore, the radius of the microring is selected to be 5 µm.
In order to obtain a better interaction between acoustic waves and light fields, acoustic wave sensors based on optical microcavities, generally, use polymers with a lower Young's modulus (GPa) as the encapsulation layer. In this study, PMMA was selected as the encapsulation material, and its Young's modulus was 3 GPa. PMMA has a larger elastic modulus and a smaller Young's modulus, which is more effective as a coating material. The thickness of the coating layer has an impact on the performance of the sensor, and the coating thickness of PMMA needs to be studied. Figure 3 is the relationship curve between different encapsulation thicknesses and the effective refractive index. After the analysis, the PMMA encapsulation thickness is selected as 0.45 µm. Since the thickness of Si is 220 nm, less than the thickness of PMMA, PMMA can also play a protective role. In addition, the bending radius of the microring resonator has great influence o effective refractive index and transmission loss of the waveguide. According t relationship between bending loss and ring radius in Reference [20], the large bending radius, the greater the effective refractive index of the trench waveguide an smaller the bending loss. Considering the bending loss, in order to obtain a larger Q the bending radius should be as large as possible. Therefore, the radius of the micr is selected to be 5 μm.
In order to obtain a better interaction between acoustic waves and light acoustic wave sensors based on optical microcavities, generally, use polymers w lower Young's modulus (GPa) as the encapsulation layer. In this study, PMMA selected as the encapsulation material, and its Young's modulus was 3 GPa. PMMA larger elastic modulus and a smaller Young's modulus, which is more effective coating material. The thickness of the coating layer has an impact on the performa the sensor, and the coating thickness of PMMA needs to be studied. Figure 3 relationship curve between different encapsulation thicknesses and the effective refr index. After the analysis, the PMMA encapsulation thickness is selected as 0.45 μm. the thickness of Si is 220 nm, less than the thickness of PMMA, PMMA can also p protective role. Therefore, according to the process and simulation results, the size parameters of the microring resonator are shown in Table 1. Therefore, according to the process and simulation results, the size parameters of the microring resonator are shown in Table 1. Figure 4a shows a deformation diagram of the sensor under the action of 1MPa sound pressure after the simulation analysis. Figure 4b is the relationship between the effective refractive index and the sound pressure calculated after comprehensively considering the effects of deformation and the elasto-optical effects on the effective refractive index of single-ring and grooved waveguides. After linear fitting, the relationship between the effective refractive index and the sound pressure of the grooved waveguide under different sound pressures is expressed as follows:
Analysis of Simulation Results
dn e f f dP = 4 × 10 −11 /Pa (8) wherein n e f f represents the effective refractive index and P is the sound pressure. The analysis shows that, due to the structural characteristics of the grooved waveguide, the optical field is localized in a small groove, and the refractive index change in the PMMA material caused by the elasto-optical effect is more obvious compared with the effective refractive index of the resonator deformation. Therefore, the relationship between sound pressure and effective refractive index is a positive correlation after linear fitting. In addition, the fitting relationship between the sound pressure and the effective refractive index of the grooved waveguide is larger than that of the single-ring waveguide, and the effective refractive index has a larger value and a larger slope. After the simulation analysis, the photoacoustic coupling model of the acoustic sensor is shown in Figure 5a. The input light wavelength is 1.55 μm, and the input light power is 1 mW. In this calculation of the acoustic wave pressure of the output optical power under different sound pressures, the color signal represents the applied acoustic signal, the green arrow represents the input light, and the red arrow represents the output light intensity. A waveguide resonant cavity with a bending radius of 5 μm is selected, with a resonant wavelength of 1561.08 nm as the reference, starting from the sound pressure of 1 MPa; the pressure adjustment step is 1 MPa. The spectrum response curve of the microcavity under different sound pressures is shown in Figure 5b. From the simulation analysis results, it can be concluded that, as the pressure increases, the resonance peak has a 40 pm/MPa linear red shift. According to the resonance Equation (2), the relationship between the resonance wavelength shift and the sound pressure is expressed as follows: After the simulation analysis, the photoacoustic coupling model of the acoustic sensor is shown in Figure 5a. The input light wavelength is 1.55 µm, and the input light power is 1 mW. In this calculation of the acoustic wave pressure of the output optical power under different sound pressures, the color signal represents the applied acoustic signal, the green arrow represents the input light, and the red arrow represents the output light intensity. A waveguide resonant cavity with a bending radius of 5 µm is selected, with a resonant wavelength of 1561.08 nm as the reference, starting from the sound pressure of 1 MPa; the pressure adjustment step is 1 MPa. The spectrum response curve of the microcavity under different sound pressures is shown in Figure 5b. From the simulation analysis results, it can be concluded that, as the pressure increases, the resonance peak has a 40 pm/MPa linear red shift. According to the resonance Equation (2), the relationship between the resonance wavelength shift and the sound pressure is expressed as follows: Through a numerical analysis, sensing is achieved by detecting the drift o resonant wavelength of the resonant spectrum. The requirements for the dete equipment are too high, and the existing equipment may not be able to meet the a test requirements. In addition to the method of detecting the wavelength shift, the intensity method can be used as a detection method for detecting the change in intensity at a specific wavelength for sensing detection of this sensor. The light inte method relies on the slope of the resonance spectrum curve near the detected wavele The greater the slope, the higher the sensitivity. According to the linear relatio between the output light intensity and the wavelength, the size of the sound pressure can be demodulated. Through a numerical analysis, sensing is achieved by detecting the drift of the resonant wavelength of the resonant spectrum. The requirements for the detection equipment are too high, and the existing equipment may not be able to meet the actual test requirements. In addition to the method of detecting the wavelength shift, the light intensity method can be used as a detection method for detecting the change in light intensity at a specific wavelength for sensing detection of this sensor. The light intensity method relies on the slope of the resonance spectrum curve near the detected wavelength. The greater the slope, the higher the sensitivity. According to the linear relationship between the output light intensity and the wavelength, the size of the sound wave pressure can be demodulated.
Frequency Response
In the acoustic signal detection based on the microring resonator, the frequency response of the acoustic sensor is mainly affected by the optical resonance in the microring resonator and the propagation of sound waves. Reference [21] shows that the main factor affecting the frequency response is the Fabry-Perot cavity effect of the sound wave, which be expressed as follows: wherein P I (k) represents the normalized frequency response, k represents the acoustic wave vector, l represents the thickness of the polymer material, T represents the pressure amplitude transmission coefficient, and R 0 and R 1 represent the amplitude reflection coefficient. Given that the thickness of PMMA polymer material is 2 µm, the acoustic impedance of PMMA is 3.2 × 10 6 Kg/(sm 2 ), the acoustic impedance of silicon dioxide is 1.31 × 10 7 Kg/(sm 2 ), and the acoustic impedance of seawater is 1.62 × 10 6 Kg/(sm 2 ). The frequency response can be calculated as shown in Figure 6. The 3 dB bandwidth of the sensor is 430 MHz.
amplitude transmission coefficient, and and represent the amplitude reflection coefficient. Given that the thickness of PMMA polymer material is 2 μm, the acoustic impedance of PMMA is 3.2 × 10 6 Kg/(sm 2 ), the acoustic impedance of silicon dioxide is 1.31 × 10 7 Kg/(sm 2 ), and the acoustic impedance of seawater is 1.62 × 10 6 Kg/(sm 2 ). The frequency response can be calculated as shown in Figure 6. The 3 dB bandwidth of the sensor is 430 MHz.
Sensitivity
The performance of acoustic sensors is mainly represented by sensitivity, which is defined as the ratio of transmission intensity to sound wave pressure. The sensitivity of the acoustic sensor is given by the following: wherein S is the sensitivity and T is the transmission intensity. For an acoustic wave sensor based on a groove microring resonator with a high Q value, the transmitted light intensity is related to the position of the resonance wavelength. Therefore, the resonant wavelength shift caused by the sound wave amplifies the change in the transmitted light intensity due to the steep resonant peak curve, thereby increasing its sensitivity. The sensitivity of the sensor is mainly determined by the Q value and the resonance wavelength. According to Equation (4), the bending radius is directly proportional to the quality factor. According to theoretical calculation, when the bending radius reaches 10 μm, the theoretical quality factor is about 10 6 . According to Equation (11), the sensitivity is 3.14 mV/Pa, which is three orders of magnitude larger based on the sensitivity of the
Sensitivity
The performance of acoustic sensors is mainly represented by sensitivity, which is defined as the ratio of transmission intensity to sound wave pressure. The sensitivity of the acoustic sensor is given by the following: ∝ Qn e f f dn e f f dP (11) wherein S is the sensitivity and T is the transmission intensity. For an acoustic wave sensor based on a groove microring resonator with a high Q value, the transmitted light intensity is related to the position of the resonance wavelength. Therefore, the resonant wavelength shift caused by the sound wave amplifies the change in the transmitted light intensity due to the steep resonant peak curve, thereby increasing its sensitivity. The sensitivity of the sensor is mainly determined by the Q value and the resonance wavelength. According to Equation (4), the bending radius is directly proportional to the quality factor. According to theoretical calculation, when the bending radius reaches 10 µm, the theoretical quality factor is about 10 6 . According to Equation (11), the sensitivity is 3.14 mV/Pa, which is three orders of magnitude larger based on the sensitivity of the silicon-based cascaded resonator acoustic sensor [10], indicating that the sensor proposed in this paper has the advantage of high sensitivity. The performance comparison between the sensor in this paper and the sensor previously studied is shown in Table 2. The comparison shows that the acoustic sensor designed in this research has the advantages of high sensitivity and a wide frequency band. Table 2. Sensitivity analysis of microring acoustic wave sensor.
Conclusions
This paper proposes an acoustic sensor based on a silicon-based grooved microring resonator. After theoretical analysis and research, it is proved that, when the bending radius reaches 10 µm, the theoretical quality factor is 10 6 , the sensitivity is 3.14 mV/Pa, and the 3 dB bandwidth is 430 MHz. The design and research of sensors with high sensitivity and broadband response have certain reference values and lay the foundation for realizing the application of acoustic sensing in the optical band of communication. In addition, the compatibility with photonic integrated circuit technology causes the sensor to have certain application potentials in array integration. | 5,157.6 | 2021-10-30T00:00:00.000 | [
"Physics"
] |
Cyclic Tension Induced Pattern Formation on [001] Single-Crystal Aluminum Foil
: Cyclic tension of (100)[001]-oriented single-crystal aluminum foils with the frequency 5 Hz forms a tweed pattern. Its period is several microns and increases by a factor of 1.5 in the temperature range 233–363 K. A model is proposed for structural relaxation of the medium on spatial and time meso- and macroscales under cyclic loading. Conditions under which a steady pattern forms are found based on the analysis of kinetic equations. The number of bands in the steady pattern is found to be related to the strain rate. The process activation energy is determined.
Introduction
Investigations into cyclic deformation of pure fcc metals play an important role in elucidating the fundamental laws of fatigue fracture initiation and propagation in metals [1][2][3]. These investigations proceed in the three interrelated directions: the cyclic stress-strain (CSS) curves, evolution of dislocation structures, and surface slip morphology [1][2][3]. The best-studied crystals are copper, nickel, and silver [1]. It can be noted, that in most cases, cyclic deformation is subjected to a qualitative study because the quantitative characterization poses methodological difficulties.
At present, the cyclic deformations of single crystals of silver, copper, and nickel with low, average, and high stacking fault energy are best understood. The CSS curves for the single slip-oriented crystals at the controlled plastic strain amplitude in the range ∆ε~(10 −4 -10 −2 ) have a plateau after the stage of primary cyclic hardening. This plateau corresponds to the formation of ladder-like Persistent Slip Bands (PSBs), whose fractions grow due to a decrease in the fraction of the surrounding matrix [1][2][3]. The observed regularities can be described quantitatively within the two-phase model [4]. PSBs on the crystal surface are the sites of extrusions and intrusions as well as fatigue crack initiation [2].
Cyclic deformation of Cu, Ni, and Ag crystals oriented for double and multiple slip at the controlled plastic strain amplitude is characterized by a higher primary hardening rate than that of the single slip-oriented crystals. As the plastic strain amplitude grows, the CSS curve can have either an extended plateau, a short plateau, or a continuous increase without a plateau and saturation depending on the single crystal orientation [5][6][7][8]. PSB ladder-like structures are usually parallel to active slip systems, but PSB ladders can correspond to secondary slip systems at higher plastic strain amplitude [9]. At γ pl > 2.5·10 −3 , type-I and type-II deformation bands appear on the specimen due to accommodation of the rotational deformation mode [8]. Therefore, crystals with double and multiple slip orientation have more diverse cyclic response curves, dislocation structures, and surface slip morphologies as compared to single slip-oriented crystals.
According to the study of the temperature effect on the cyclic deformation of differently oriented copper single crystals, a lower test temperature leads to an increase in saturation stress and a systematic decrease in the PSB wall period [10,11]. It was shown [2] that the saturation stress under cyclic deformation corresponds to the flow stress at the stage of parabolic hardening for copper single crystals. The investigation of the PSB cross sections and shapes of the corresponding extrusions and intrusions on the specimen surface reveals their dependence on the temperature [11,12].
The behavior of aluminum single crystals under cyclic deformation, whose stacking fault energy is the highest of fcc crystals, differs strongly from that of copper, nickel, and silver crystals [13][14][15][16]. Differences appear in CSS curves, dislocation ordering, and surface morphology. CSS curves for aluminum single crystals under room temperature cyclic deformation have no plateau at any controlled plastic strain amplitude [14]. It is replaced by the "primary cyclic hardening-softening-secondary cyclic hardening" sequence [14]. PSB ladders typical of copper, nickel, and silver are not observed on cyclically-deformed aluminum single crystals at room temperature, and dislocation ordering has mainly a cellular structure regardless of orientation [13,16].
The slip band pattern in aluminum has important differences from that in copper, nickel, and silver: PSBs do not propagate through the crystal cross section, but small PSB segments are clustered on the active slip plane [13].
Under cyclic deformation of aluminum single crystals with the [001] orientation, a specific structure is formed on the crystal surface, which appears as a regular network of fine lines in the optical microscope and is referred to as a tweed structure [14]. According to the detailed study [15], this structure presents spherical protrusions on the aluminum surface, which are formed by diagonal lines at an angle of 45 • to the loading axis and whose characteristics are independent of the controlled plastic strain amplitude and the number of cycles. The authors of [15] pointed to an important feature of the fracture of specimens: "Most of the runs had to be terminated due to a collapse of the crystals and corresponding break-down of the control over the plastic strain amplitude. Fatigue failure in the traditional meaning in which cracks are formed and one of these grows to a size that lead to the ultimate failure was not observed".
A similar structure was also observed on polycrystalline pure aluminum foils glued to flat specimens of high-strength aluminum alloy under cyclical deformation at room temperature [17] and 77 K [18]. The tweed structure can be observed in some bulk polycrystalline aluminum grains [19].
The dislocation structure of aluminum single crystals and polycrystals after cyclic deformation was studied in a number of works to establish a direct correlation between the tweed structure on the surface and the underlying dislocation structure [15,18]. According to [15], at the stage of cyclic softening and secondary cyclic hardening, when the cord and tweed structures form on foils cut parallel to the surface, there appear dislocation walls (001). In some areas they form simultaneously, resulting in a labyrinth-like structure. Similar wall structures, predominantly with the single set of {100} walls, were observed in polycrystalline aluminum foils after cyclic deformation at the temperature 77 K [18].
The formation of wall structures in fatigued FCC metals is fully explained by the (double pseudo-polygonization) model proposed in [20]. The model predicts the highest probability of (001) walls for [001] single crystals and the equal probabilities of (100) and (010) walls. In accordance with the model, the (001) walls can accommodate all eight slip systems activated under high-symmetry loading of a single crystal. The remaining {100} walls can accommodate only dipole loops in the four expected slip systems.
The study of foils cut from the inner regions of crystals [15] showed that, at the stage of secondary cyclic hardening, an ordered system of spherical dislocation subcells with a low density of dislocations in them is formed on the (001) dislocation walls. This indicates an increase in the dislocation density in the walls at the stage of secondary cyclic hardening and the formation of subcells at a critical level [15].
From the analysis it was found [16] that {100} dislocation wall structures observed on foils cut from different crystallographic sections correspond to a series of weak lines on the specimen surface, parallel and perpendicular to the tension axis. However, the correspondence between the dislocation structure of the walls and the tweed structure on the surface remains unclear [15].
According to [18], the tweed structure formation depends on the presence of a significant number of dislocations with four distinct Burgers vectors, which requires a tension axis close to [001]. To investigate the dislocation structure in relation to surface deformation features, one side of the foils was electropolished [18]. Based on the analysis of the investigation results, the tweed structure is suggested to form as a result of the extrusion process [18], in which the material surrounding the tweed structure protrusions is relatively soft. Thus, the analysis of the literature data on the relationship between the tweed structure on the aluminum crystal surface and the underlying dislocation structure shows that this issue remains open and requires further investigation.
The study of the temperature effect on the cyclic deformation of aluminum single crystals showed that a decrease in the test temperature to 77 K increases the saturation stress and results in a plateau in the CSS curves [14]. The dislocation structure formed at 77 K was similar to the two-phase structure formed in single slip-oriented copper under cyclic deformation, though no ladder-like structure was observed in this case [14].
The independence of the tweed structure period on the controlled plastic strain and the number of cycles [15] allows for a quantitative study of the temperature effect on the cyclic deformation of aluminum single crystals.
The aim of this work is to study the effect of temperature on the tweed structure formation during constrained cyclic tension of [001]-oriented single-crystal Al foils and to develop a theoretical model of macroscopic deformation and pattern formation.
Materials and Methods
As a substrate, we used specimens of grade D1 duralumin in the form of dumbbells with the gauge section 60 mm × 10 mm × 2 mm. Duralumin specimens were mechanically polished using pastes of different dispersion. Foils were prepared from plates measuring 16 mm × 20 mm × 0.5 mm, which were cut from an Al single crystal with orientation (100)[001] by the electroerosive method. Both sides of the plates were mechanically polished to a thickness of about 250-270 microns on polishing paper with a gradual decrease in the grain size of the abrasive to 3-5 microns, and then electrochemically polished in 74 mL N 2 SO 4 , 74 mL H 3 PO 4 , 16 g CrO 3 , 56 mL H 2 O electrolyte up to the thickness 200 µm. The foils were glued to the central part of the duralumin specimen surface using Loctite 480 glue.
Duralumin specimens were tested for low-cycle fatigue at the temperatures 233, 263, 296, 313, 333, and 363 K with the following parameters: frequency f = 5 Hz, σ max = 165 MPa, σ min = 0.1σ max , and σ mean = (σ max − σ min )/2. Loading was performed using a UTM150 servohydraulic testing machine (BISS (P) Ltd.) with a PAC-70-B-EUR-RRU-INT climatic chamber (CM Envirosystems (P) Ltd.). In the first 10 cycles at 1 Hz, the stress amplitude was elevated gradually from 0 to 165 MPa. Then, the frequency was brought to 5 Hz. The number of test cycles N = 5000 was the same for all specimens. After testing, the specimens were removed from the testing machine and the surface of single-crystal Al foils was examined through an Axiovert 25CA optical microscope.
Results
The surface of single-crystalline aluminum foils after cyclic tension demonstrates a tweed structure, which was earlier observed on foils [17] and bulk specimens [15] of (100)[001]-oriented aluminum single crystals at room temperature. This pattern appears on single-crystalline aluminum foils soon after the onset of cyclic deformation without visible slip lines, which conforms to the previous studies of cyclic deformation of bulk specimens and foils of aluminum single crystals [15,17]. The formation of a pattern on the foils is accompanied by the appearance of an interference shade, visible through the window of the climatic chamber [17]. The visible interference shade is used to estimate the formation time of the tweed structure t. The time varies from t 1 = (600 ± 60) s in cyclic tests at T 1 = 233 K to t 2 = (200 ± 30) s at the temperature T 2 = 363 K. Figure 1 exemplifies optical images of the tweed structure formed at the temperatures T = 233, 296, and 363 K. It can be seen that an increase in the test temperature leads to a noticeable increase in the tweed structure period. specimens and foils of aluminum single crystals [15,17]. The formation of a pattern on the foils is accompanied by the appearance of an interference shade, visible through the window of the climatic chamber [17]. The visible interference shade is used to estimate the formation time of the tweed structure t. The time varies from = (600 ± 60) s in cyclic tests at = 233 K to = (200 ± 30) s at the temperature T2 = 363 K. Figure 1 exemplifies optical images of the tweed structure formed at the temperatures T = 233, 296, and 363 K. It can be seen that an increase in the test temperature leads to a noticeable increase in the tweed structure period. To obtain quantitative information, optical images of the tweed structure are statistically processed and histograms are constructed for each test temperature. Figure 2 shows histograms for the temperatures 233, 296, and 363K. From the histograms, it can be seen that the average period of the tweed structure increases with increasing temperature. This indicates that the tweed structure formation during cyclic tension is a thermally-activated process. From the tweed structure period histograms, follow the average period (R) and the standard deviation ΔR for each test temperature. The average period of the tweed structure on the interval ΔТ = (233 ÷ 363) K increases from = (1.92 0.12) µm to = (3.03 0.15) µm, i.e., by almost 50%. Figure 3 plots the dependence of ln ( ) on 1/T, which approximates well to a straight line. To obtain quantitative information, optical images of the tweed structure are statistically processed and histograms are constructed for each test temperature. Figure 2 shows histograms for the temperatures 233, 296, and 363K. From the histograms, it can be seen that the average period of the tweed structure increases with increasing temperature. This indicates that the tweed structure formation during cyclic tension is a thermally-activated process. From the tweed structure period histograms, follow the average period (R) and the standard deviation ∆R for each test temperature. The average period of the tweed structure on the interval ∆T = (233 ÷ 363) K increases from R 1 = (1.92 ± 0.12) µm to R 2 = (3.03 ± 0.15) µm, i.e., by almost 50%. Figure 3 plots the dependence of ln R R 1 on 1/T, which approximates well to a straight line. This indicates that thermally-activated processes of structural relaxation determine the tweed structure formation on single-crystalline aluminum foils under cyclic tension. specimens and foils of aluminum single crystals [15,17]. The formation of a pattern on the foils is accompanied by the appearance of an interference shade, visible through the window of the climatic chamber [17]. The visible interference shade is used to estimate the formation time of the tweed structure t. The time varies from = (600 ± 60) s in cyclic tests at = 233 K to = (200 ± 30) s at the temperature T2 = 363 K. Figure 1 exemplifies optical images of the tweed structure formed at the temperatures T = 233, 296, and 363 K. It can be seen that an increase in the test temperature leads to a noticeable increase in the tweed structure period. To obtain quantitative information, optical images of the tweed structure are statistically processed and histograms are constructed for each test temperature. Figure 2 shows histograms for the temperatures 233, 296, and 363K. From the histograms, it can be seen that the average period of the tweed structure increases with increasing temperature. This indicates that the tweed structure formation during cyclic tension is a thermally-activated process. From the tweed structure period histograms, follow the average period (R) and the standard deviation ΔR for each test temperature. The average period of the tweed structure on the interval ΔТ = (233 ÷ 363) K increases from = (1.92 0.12) µm to = (3.03 0.15) µm, i.e., by almost 50%. Figure 3 plots the dependence of ln ( ) on 1/T, which approximates well to a straight line.
(a) (b) (с) This indicates that thermally-activated processes of structural relaxation determine the tweed structure formation on single-crystalline aluminum foils under cyclic tension.
Cyclic-Induced Pattern Formation Model
Let us now dwell on factors determining plastic deformation localization and conditions under which the periodic pattern forms. Dislocation mechanisms of plastic deformation of aluminum at the stage of parabolic hardening due to double cross slip of dislocations are known. However, dislocation mechanisms alone are inadequate to describe macroscopic deformation. There are two reasons for this. First, consideration should be given not only to dislocations, as stated above, but also to point defects. Second, structural relaxation of the system under plastic deformation is determined by the nucleation and interaction of deformation carriers on all spatial and time scales. However, not only direct solutions but also the writing of dynamic equations is hardly possible for deformation carriers. Other approaches are required to take into account the relaxation processes on large spatial and time scales. A phenomenological approach was proposed to solve the problem [21]. This approach can be used to study the regularities of nucleation and propagation of localized deformation bands in the form of traveling fronts at the stage of easy slip and linear hardening. In this work, the developed approach and the macroscopic deformation model are applied towards the solution of the pattern formation problem during the cyclic tension of foils of aluminum single crystals with cubic orientation.
We study the deformation of a flat specimen, which is cyclically extended to ( ) along axis x with frequency f under applied stress σ. The specimen plane lies in the plane z = 0 of the Cartesian coordinate system x,y,z. A medium under deformation is assumed to be homogeneous and isotropic. The one-dimensional case is considered.
Plastic deformation is a process of structural relaxation determined by the nucleation and motion of deformation carriers under an external force on all spatial and time scales ( is the scale number). By the scale, we mean the unstable-mode wavelength ~ with the frequency ~ of correlated displacements of inelastic deformation carriers.
Structural relaxation of a deformed medium on larger scales is determined by processes on smaller scales. Consideration is given to two spatial and time scales: < , < . In the experiments, we measure the displacement distribution related to the long-wave mode on the scale ~1 μm. The short-wave mode is determined by displacements on scales less than .
According to [21], structural changes in a deformed medium can be described by dynamic order parameters ( , ) and η( , ). These functions have the meaning of a volume fraction with structural changes leading to the excitation and development of two deformation modes on scales and , respectively. Local plastic strain in the linear approximation is written in the form
Cyclic-Induced Pattern Formation Model
Let us now dwell on factors determining plastic deformation localization and conditions under which the periodic pattern forms. Dislocation mechanisms of plastic deformation of aluminum at the stage of parabolic hardening due to double cross slip of dislocations are known. However, dislocation mechanisms alone are inadequate to describe macroscopic deformation. There are two reasons for this. First, consideration should be given not only to dislocations, as stated above, but also to point defects. Second, structural relaxation of the system under plastic deformation is determined by the nucleation and interaction of deformation carriers on all spatial and time scales. However, not only direct solutions but also the writing of dynamic equations is hardly possible for deformation carriers. Other approaches are required to take into account the relaxation processes on large spatial and time scales. A phenomenological approach was proposed to solve the problem [21]. This approach can be used to study the regularities of nucleation and propagation of localized deformation bands in the form of traveling fronts at the stage of easy slip and linear hardening. In this work, the developed approach and the macroscopic deformation model are applied towards the solution of the pattern formation problem during the cyclic tension of foils of aluminum single crystals with cubic orientation.
We study the deformation of a flat specimen, which is cyclically extended to ε(t) along axis x with frequency f under applied stress σ. The specimen plane lies in the plane z = 0 of the Cartesian coordinate system x,y,z. A medium under deformation is assumed to be homogeneous and isotropic. The one-dimensional case is considered.
Plastic deformation is a process of structural relaxation determined by the nucleation and motion of deformation carriers under an external force on all spatial l i and time t scales (i is the scale number). By the scale, we mean the unstable-mode wavelength λ i ∼ l i with the frequency ω i ∼ 1 t i of correlated displacements of inelastic deformation carriers. Structural relaxation of a deformed medium on larger scales is determined by processes on smaller scales. Consideration is given to two spatial and time scales: In the experiments, we measure the displacement distribution related to the long-wave mode on the scale l 2 ∼ 1 µm. The short-wave mode is determined by displacements on scales less than l 1 .
According to [21], structural changes in a deformed medium can be described by dynamic order parameters ϕ(x, t) and η(x, t). These functions have the meaning of a volume fraction with structural changes leading to the excitation and development of two deformation modes on scales l 1 and l 2 , respectively. Local plastic strain in the linear approximation is written in the form Here, ε S , ε L are the parameters determined by the mechanisms and conditions of the medium deformation on the scales l 1 , l 2 , respectively. Deformation modes at which appear in experiments. Here, angular brackets stand for averaging over the specimen length. At each time instant, deformation modes that decrease the elastic energy of the system are excited and develop. Kinetic equations for ϕ and η have the form [22] Here, α, g, q 2 , q 3 , b, p are the parameters determined by plastic deformation carriers. The parameter α depends on the applied load (elastic strain ε el ) and may alternate in signs, the rest of the parameters are positive. The quantity r = ε c −ε el ε c > 0 is the dimensionless threshold of stability of the medium at ϕ = 0. The deformation carrier nucleation (ϕ > 0) lowers the stability threshold of the system and initiates structural relaxation of the medium on larger scales. When the temperature rises, r for aluminum decreases.
By introducing the variables Equations (3) and (4) can be reduced to (the sign "∼" is further omitted) Here The system of two coupled nonlinear parabolic Equations (6) and (7) describes structural relaxation of a deformed medium on two spatial and time scales. The governing parameter in these equations is elastic strain. Let us explain the physical meaning of Equations (6) and (7). At ϕ = 0, Equation (6) has a unique stable homogeneous solution η 0 = η = 0, which describes elastic deformation of the medium. Plastic strain is determined by a change in the internal structure on a smaller scale, which occurs at ϕ > 0. The parameter d in (6) depends on the strain rate . ε of the specimen: d increases with an increase in . ε. Equation (7) describes the structural changes on smaller scales, which are determined by the nucleation and development of ensembles of interacting carriers of irreversible deformation. At η = 0, E (7) has two homogeneous stationary solutions ϕ 0 = ϕ = 0 . The solution ϕ 0 describes the elastically deformed state of the medium. The solution ϕ h describes the state of the medium with structural changes. It is stable at α > −2β 2 /9. With consideration for the stress dependence of the parameters α and β, the ratio −2 β 2 9 = α gives the threshold stress above which irreversible displacements are excited in the medium. At α > 0, the solution ϕ 0 is unstable to small heterogeneous perturbations. The pattern formation is preceded by a uniform deformation. Therefore, the parameter α > 0. The presence of the term −cηϕ on the right side of (7) means that mesoscopic plastic deformation is accompanied by an increase in the elastic energy of the system.
Equations (6) and (7) always have a homogeneous stationary solution η 0 = η = 0, ϕ 0 = ϕ = 0. Stationary homogeneous solutions ϕ h > 0, η h > 0 present the intersection points of the curves η = (−1 + dϕ) 1/2 (9) The analysis of the solutions of these equations and their stability to small perturbations shows the following. Stationary solutions ϕ h > 0, η h > 0 can be unstable to small heterogeneous perturbations at At d = d c , curve (9) intersects curve (10) at its maximum point ϕ = ϕ m = β/2. From (11), it can be seen that increasing c reduces d c and contributes to deformation localization. Instability to small heterogeneous perturbations develops when the inequalities are valid. Here With Equation (13), the third inequality in (12) reduces to the condition v η > v ϕ . The second inequality in (12) means that deformation carriers should be in short-range interaction with each other and have a shorter characteristic time of excitation. This explains the fact that structural relaxation at the stage of parabolic hardening is determined by double cross slip of dislocations and the generation of point defects. However, the physical aspect of its influence on macrodeformation localization is poorly understood within the theory of defects.
Discussion
Solutions to Equations (6) and (7) describing the periodic pattern formation are analyzed by numerical methods for the one-dimensional case. The initial and boundary conditions are given in Appendix A. The analysis of the numerical solutions shows that the pattern of spatial structures is almost completely determined by l 2 in (6) and l in (13). On higher l 2 at a constant value l, the spacing between localized deformation bands increases. As an example, Figures 4 and 5 show the spatial distributions of the dynamic order parameters at α = 0.03, β = 0.8, d = 10, l 1 /l 2 = 0.05, τ = 0.8 c = 0.4 (14) and on different l 2 . At the parameters used in (14), η h ≈ 0.26, ϕ h ≈ 0.11, d c ≈ 3.1. The distribution of the dynamic order parameters calculated for l 2 = 1.4 is shown in Figure 4. As indicated above, the spacing between localized deformation bands is almost completely determined by the value ~1/ / in (6). The higher is, the greater R is. This means that the r value should decrease as the temperature rises. The linear dependence ln ( )~-1/T ( Figure 3) means that ( / )~− 1/T, i.e., the dimensionless threshold of stability decreases with increasing temperature, which is in qualitative agreement with the experimental data.
Plastic deformation is a thermally-activated process. Let us express strain rates at the temperatures T , T as (T )~exp (− ), (T )~exp (− ), respectively. Here, E is the activation energy at the stage of uniform deformation (preceding the instability), and k is the Boltzmann constant. Under cyclic tension, the strain of the specimen remains constant. Then the equality holds. The activation energy found from (15) equals ≈ 0.07 eV. This value is lower than the migration activation enthalpy of interstitial atoms in aluminum ≈ 0.12 eV [22]. At the same time, E is higher than the activation energy of dislocation slip. It should be noted that no model underlies the estimate of the activation energy, but it stems from the analysis of kinetic equations of the formation of a stationary pattern and the relationship between the strain rate and the number of bands. The analysis of results based on the Einstein diffusion model [23] gave the migration activation energy of interstitial aluminum atoms ≈ 0.12 eV.
The results obtained show that the tweed structure formation on single-crystalline Al foils under cyclic tension in the temperature range ΔТ = (233 ÷ 363) K is controlled by the It can be seen that plastic deformation is nonuniformly distributed along the specimen length. Nine equally spaced striations are formed. In so doing, ϕ ≈ ϕ h , η ≈ 0.31 > η h . On l 2 = 2.1, six localized deformation bands are formed ( Figure 5), as in Figure 4, ϕ ≈ ϕ h , η ≈ 0.31 > η h . With increasing temperature, the parameter r decreases, and l 2 , according to (5), increases. Therefore, the band spacing increases with temperature, which agrees with the experiment.
As indicated above, the spacing between localized deformation bands is almost completely determined by the value l 2 ∼ 1/r 1/2 in (6). The higher l 2 is, the greater R is. This means that the r value should decrease as the temperature rises. The linear dependence ln R R 1 ∼ −1/T (Figure 3) means that n(r 1 /r 2 ) ∼ −1/T, i.e., the dimensionless threshold of stability decreases with increasing temperature, which is in qualitative agreement with the experimental data.
Plastic deformation is a thermally-activated process. Let us express strain rates at the temperatures T 1 , T 2 as .
, respectively. Here, E is the activation energy at the stage of uniform deformation (preceding the instability), and k is the Boltzmann constant. Under cyclic tension, the strain of the specimen remains constant. Then the equality t 1 .
holds. The activation energy found from (15) equals E ≈ 0.07 eV. This value is lower than the migration activation enthalpy of interstitial atoms in aluminum E i m ≈ 0.12 eV [22]. At the same time, E is higher than the activation energy of dislocation slip. It should be noted that no model underlies the estimate of the activation energy, but it stems from the analysis of kinetic equations of the formation of a stationary pattern and the relationship between the strain rate and the number of bands. The analysis of results based on the Einstein diffusion model [23] gave the migration activation energy of interstitial aluminum atoms E i m ≈ 0.12 eV.
The results obtained show that the tweed structure formation on single-crystalline Al foils under cyclic tension in the temperature range ∆T = (233 ÷ 363) K is controlled by the slowest process, i.e., migration of interstitial atoms. This is an unexpected result since a small number of interstitial atoms (as compared to vacancies) form in fcc metals due to the much higher energy of formation, and the probability of their annihilation is high. In addition, interstitial atoms have low migration activation energy and high mobility. Therefore, they are thought to be completely annealed at temperatures of several tens of Kelvin.
The above arguments are very weighty, and an increase in the role of interstitial atoms within point defect kinetics remains currently unexplained. The role of point defects in mass redistribution and surface relief formation during cyclic deformation and solidification of | 7,184.8 | 2021-12-24T00:00:00.000 | [
"Materials Science"
] |
The Effect of Using Website Games on Fourth Grade EFL Students' Learning of Vocabulary and Grammar in Jijin Secondary School in Jordan
The first and foremost purpose of this study is to investigate if Website Games can be used as a useful tool for teaching vocabulary and grammar to young learners. The importance of the present study is to explore the effectiveness of using Website Games on developing vocabulary and grammar by Jordanian young learners. Since learning vocabulary and grammar plays a significant role for having good language ability, boosting grammar repertoire and vocabulary growth would improve language ability and help the speaker to have a successful communication. The sample of the study consisted of 48 fourth grade students from Jijjin Secondary School in Irbid, Jordan, during the second semester of the scholastic year 2019/2020. They were already divided into two groups: Group one consisted of (23) students that was chosen as an experimental group and was taught vocabulary and grammar via Website Games technology by the researcher and group two consisted of (25) students, which was assigned as a control group and was taught through regular instruction by the teacher. Students in both groups sat for a vocabulary and grammar tests at the beginning of the first semester of the scholastic year so as to determine their actual knowledge before starting the experiment. Eight weeks later, they sat to a post-test. The results of the study indicated that the experimental group performed better than the control group in the acquisition of vocabulary and grammar due to the new strategy of teaching that is using Website Games. on Asian students’ English vocabulary learning. A meta-analysis was conducted to minimize heterogeneity between studies. The data were analyzed and visualized according to effect sizes. The results of this study presented that digital games played an important role in stimulating Asian students’ English vocabulary learning. This study examined factors that influenced students’ English vocabulary learning, including learning stage, game-aided teaching method, native language exposure, game platform, and game scenario. The results showed that native language background had the most noticeable impact, while negative transfer of Asian students' native language hindered the vocabulary learning. The effects of game platform, game-aided teaching method, and game scenario were not substantial. Simsek, Bilal; Direkci, Bekir (2019) conducted an interesting study entitled "The Investigation of the Relationship between Online Games and Acquisition of Turkish Vocabulary". This study examined if there is a relationship between online games and Turkish vocabulary acquisition. It used the sequential explanatory design of the mixed methods research. The quantitative data were gathered of 225 students studying in two secondary schools. The qualitative data were collected from 20 participants. While quantitative data was collected by using vocabulary knowledge assessment form, for the qualitative data, a semi-structured interview form was used. There was a weighty difference between the scores in vocabulary knowledge test and the experience in playing the game, duration of the game, and time spent in watching broadcasts. There wasn't any significant difference between vocabulary scores and following League of Legends on media or social media. The qualitative findings discovered the relation of watching broadcasts and playing League of Legends to learn vocabularies of foreign origin with its reasons. These findings are in harmony with Ashraf, Motlagh and Salami, (2014) who investigated the usefulness of online games in vocabulary learning of Iranian EFL students. In a study entitled "The Impact of Online Games on Learning English Vocabulary by Iranian (Low-intermediate) EFL Learners" The participants, (24) low-intermediate EFL learners, were randomly assigned to experimental and control groups. The experimental group learnt some new words via online computer games in 15 weeks. A vocabulary-based test, acting as pre-test and posttest, was conducted in the first and 15th weeks. The findings of their study indicated that the experimental group outperformed the control group statistically significant in the post-test. Therefore, online games proved to be more effective in learning English vocabulary for these students.
Background of the Study
English language has become the language of technology, economy, politics, and education. Therefore, school pupils, university students and a lot of ordinary people are interested in learning this language since they altogether believe that learning English language can open numerous work opportunities (Brinton & Gazkill, 1987).
It is widely noted that the new technology has brought a great relief to pedagogy and made learning processes easier, relevant to life, and simplified. However, the benefits and versatility of technology are not as evident as it is with teaching and learning English. What is more, it is widely said that teaching and learning opportunities can be expanded through appropriate application of technology (Behzadi, 2015).
According to (Atiyat, 1992cited in Bataineh, 2014 website games plays an essential role in the teaching process as students feel relaxed while practicing different language skills and breaking the educational obstacles because teachers who adopt them must consider individual differences among their students. Then, they are required to plan for what they have to do and to provide their students with the needed materials. Moreover, teachers should provide their students with the experiences to be more energetic and communicative. One useful strategy to encourage learning a foreign language is using language games. Language learning is a hard task that can sometimes be frustrating. Constant effort is required to understand, produce and manipulate the target Language. Well-Chosen games are invaluable as they give students a break and at the same time allow students to practice language skills. Furthermore, they employ meaningful and useful language in real contexts. They also encourage and increase cooperation. (Mubaslat, 2012).
Each technological instrument has its specific advantages and application with one of the four language sections (listening, speaking, reading, and writing). However, in order to utilize these techniques positively, the ELL student should be habitual with using computers and internet, and able of interacting with these techniques. The impact of technology has become enormous in learning and teaching the language in addition to the teacher's role. In other words, the role of the teacher and the role of the technology together can lead to progressive learning results (Sharma, 2009).
According to (Sari, 2006), a flashcard maker or an online game like a crossword is offered by some web-sites. These ways are helpful because they provide users with fun, but learning at the same time. Games are used for fun and the useful practice of language. Sutheo (2004) argued that web-site games are used in the various language skills. Web-site games encourage learners to have interaction with each other to create a relaxing and meaningful context for language use. Lingnau, Hoppe and Mannhaupt, (2003) claimed that web-site games are being used for the sake of entertainment for many years, whereas the presentation of simulation and games in the educational field is a recent improvement. How to change the old-fashioned classroom atmosphere and the learning atmosphere that had been categorized by the teacher as confident source and the student as a passive receiver is one of the main worries of current approaches in education. The goal then was to create an active atmosphere where the teacher is a facilitator and the student is an active member in the process.
One of the most contemporary teaching approaches in teaching vocabulary and grammar in classroom is using Website Games, which might help teachers to achieve their class objectives and it might also help students get rid of monotony as well as they feel the language itself.
The present study is an attempt to investigate the effect of using Website Games on fourth grade EFL students' learning of vocabulary and grammar in Jordan.
Statement of the Problem
The researcher noticed that in Jordan schools, students find difficulty in learning English as a foreign language (EFL) in general and learning English vocabulary and grammar in particular. The most difficult elements of learning a foreign language, particularly in an EFL context, are the grammar and the retention of vocabulary.
Furthermore, most EFL teachers complain that students are unable to control the basic grammar of the target language with a very small number of vocabulary repertoires in relation to the various years they spend for learning English. Accordingly, weakness in all vocabulary and grammar including grammar and vocabulary reveals their obvious failure in fruitfully communicating with other people using English language.
Thus, the researcher believes that the application of technological developments especially in the area of Website Games becomes of a highest request. So, this study aims to test the value of using Website Games in EFL learners' achievement in vocabulary and grammar.
Purpose of the Study
The purpose of this study is to examine the possibility of using Website Games technology and its effect on learners' vocabulary and grammar. The previous and current studies are on using Website Games technology for improving only one of language components (grammar and vocabulary). In addition, teachers and students are in need to develop their vocabulary and grammar via using Website Games technology.
Significance of the Study
This study provided pedagogical applications for teachers, students, as well as curricula designers. More significantly, the results might help teachers to have a better vision on using a variety of activities such as games in creating contexts in which the language is beneficial and meaningful. Furthermore, this study assured the significance of using web-site games for learning vocabulary on students' different communication skills development. In addition, it is likely that this study will help students in applying the vocabulary they have learned in their real life situations. English speaking sites provided excellent opportunities to meet authentic texts (Stitheo, 2004). Moreover, the importance of this study comes from the fact that it provides FL teachers with objective evidence about the usefulness of using web-site games in teaching vocabulary to fourth grade students. It also helped teachers increasing students' motivation towards learning vocabulary as they were taught through computerized games in lessons full of enjoyment (Ernoz, 2000).
-The results of the study might be formally adopted by the Ministry of Education and Ministry of Higher Education to use contemporary strategies such as Website Games in teaching vocabulary and grammar skills and competencies.
-Curricula creators may benefit from this study, in the sense that they may recommend using Website Games for developing the EFL learners' vocabulary and grammar.
Question of the Study
Are there any significant differences between the mean scores of the experimental and control group students' vocabulary and grammar due to the strategy of teaching (Using Website Games vs. regular instruction)?
Hypotheses of the Study
There are no statistically significant differences between the mean scores of the experimental and control group students' vocabulary and grammar at a<0.05 due to Using Website Games vs. regular instruction.
Limitations of the Study
The study, which is conducted during the second semester of 2019-2020 scholastic year, at Jijin Secondary School, in Irbid, Jordan investigates the effect of using Website Games on improving EFL 4 th graders' achievement in English grammar and vocabulary. The grammar aspects included in this study were chosen in light of the table of contents of the fourth grade textbook. These topics included past simple, present continuous, present perfect, countable/uncountable nouns, and comparative and superlative adjectives. The sample of the study was limited to 48 students. Secondly, the duration of the study was limited to a period of 8 weeks. It is also limited to using Website Games technology. ISSN 1948-5425 2021
Studies That Investigate the Effect of Using Website Games on Language Learning
An online or website game is a tool which may help in language learning. Playing online games may have a positive effect on the young learners' mind. Klimova and Kacet (2017) examined the effect of computer games on language learning. The purpose of study was to explore the effect of computer games on language learning and list its benefits and limitations for foreign language learning. The findings indicate that computer games, especially the educational ones, are effective in the vocabulary acquisition in foreign language learning. In addition, the results of the study indicated other benefits of using computer games in classrooms such as exposure to the target language, increased engagement, or enhancement of learners' involvement in communication. On the contrary, the findings revealed certain limitations of their use in language learning such as the fact that high interactivity may hinder the vocabulary acquisition and learning, not all games are useful for language learning, or a lack of knowledge about computer games among language teachers and institutions hinders their proper use.
Ibrahim (2017) conducted a study entitled "Advantages of Using Language Games in Teaching English as a Foreign Language in Sudan Basic Schools" aimed at investigating the advantages of utilizing language games in teaching English as a foreign language in Sudan Basic Schools. The problem of the study was that, the researcher believes that the problem of the study stems from students' low output in English language tests, lack of motivation and weak participation in class. The study adopted a quasi-empirical method. The sample of the study consisted of (30) English teachers in East Gezira Locality. The population of the study was all English teachers in the second period in the academic year 2017 / 2018. The results revealed that teaching language games are useful to EFL Learners. Language games can help students in building a good relationship with the new language. Based on these results the researcher recommended that teachers should change their role from instructors who dominate the class into educators whose role is to help, guide and support the students to acquire the foreign language.
Through what has been reviewed in previous studies, it could be concluded that games may play important role in developing the learners' language skills. Thus, the current study is an attempt to investigate the effect of using Website Games on EFL eighth graders' vocabulary and grammar achievement.
Studies That Investigate the Effect of Using Website Games on Learning Vocabulary
A number of studies have been conducted to investigate the effectiveness of Website Games on learning vocabulary and motivation. Some others indicated no effect for the Website Games and others suggesting the advantage of Website Games on students' vocabulary acquisition and motivation.
Lina Lafta Jassim (2020) conducted a study entitled "Effect of Digital Games on English Vocabulary Learning: A Meta-Analysis". This study aimed to test the impact of digital games International Journal of Linguistics ISSN 1948-5425 2021 on Asian students' English vocabulary learning. A meta-analysis was conducted to minimize heterogeneity between studies. The data were analyzed and visualized according to effect sizes. The results of this study presented that digital games played an important role in stimulating Asian students' English vocabulary learning. This study examined factors that influenced students' English vocabulary learning, including learning stage, game-aided teaching method, native language exposure, game platform, and game scenario. The results showed that native language background had the most noticeable impact, while negative transfer of Asian students' native language hindered the vocabulary learning. The effects of game platform, game-aided teaching method, and game scenario were not substantial.
Simsek, Bilal; Direkci, Bekir (2019) conducted an interesting study entitled "The Investigation of the Relationship between Online Games and Acquisition of Turkish Vocabulary". This study examined if there is a relationship between online games and Turkish vocabulary acquisition. It used the sequential explanatory design of the mixed methods research. The quantitative data were gathered of 225 students studying in two secondary schools. The qualitative data were collected from 20 participants. While quantitative data was collected by using vocabulary knowledge assessment form, for the qualitative data, a semi-structured interview form was used. There was a weighty difference between the scores in vocabulary knowledge test and the experience in playing the game, duration of the game, and time spent in watching broadcasts. There wasn't any significant difference between vocabulary scores and following League of Legends on media or social media. The qualitative findings discovered the relation of watching broadcasts and playing League of Legends to learn vocabularies of foreign origin with its reasons. Silsupur (2017) in a very fruitful study entitled "Does Using Language Games Affect Vocabulary Learning in EFL Classes?" attempted to investigate the role of using word games in L2 vocabulary acquisition. 12 female participants from Uludag University were selected for control and experimental groups. Additionally, 35 participants from different universities in Turkey were invited to attend the study. An online questionnaire about the effect of games on vocabulary learning was administered to35 participants. a vocabulary quiz was administered to both groups to determine the differences between them. The scores obtained from vocabulary quiz showed that the experimental group outperformed the control group in vocabulary quiz. In addition, the results of his study revealed that games reduce negative feelings during the learning process. It was suggested that teachers should reconsider the role of games and appreciate their educational value. examined the effect of web-site games on vocabulary acquisition, reading comprehension and motivation of Saudi students, and proposed a model for material development. The researcher's observation was that FL students can get a good use of traditional games. The participants of the study were selected arbitrarily from Madinah Directorate of Education; Anwar AL-Faihaa' School. The participants of the study were 40 male pupils. The experimental group consisted of 20 students, while the control group also consisted of 20 students. The results of the study indicated that students of the experimental group had better results than those of the control group. The researcher could conclude that International Journal of Linguistics ISSN 1948-5425 2021 web-site games could facilitate reading comprehension and vocabulary acquisition because they motivate students.
Studies That Investigate the Effect of Using Website Games on Learning Grammar
There has been a long time that in the classroom setting only the teacher's feedback in a traditional way has been used in teaching.
Adil Kayan and İbrahim Seç kin Aydın (2020) in a study entitled "The Effect of Computer-Assisted Educational Games on Teaching Grammar" investigated the impact of computer-assisted instruction and consistently computer-assisted educational games on grammar achievement and toward grammar and Turkish course of students. A quasi-experimental design study with a pretest-posttest nonequivalent group was applied. Sample of the study consisted of two classes of 6th grade students at a middle school. Computer-assisted educational games were deliberated and practiced in the experimental group within a 12-week period. Activities in the curriculum were followed during lessons for the control group. The results concerning Grammar academic achievement of students between the experimental group in which computer-assisted educational games were practiced and the control group in which the curriculum was used showed a significant difference on achievement of students. Jerome (2016) in a study entitled "The Impact of Classroom Games on the Acquisition of Second Language Grammar" examined the impact of classroom games on the acquisition of second language grammar. The sample of the study consisted of 34 Turkish learners of English as a second language. The experimental group was exposed to three class period of games, while the control group had three class periods of traditional instruction (e.g. worksheets and whiteboard explanations). A pre-test, a post-test, and a delayed post-test were given. The results showed that the participants in the experimental group were significantly motivated to learn grammar by games more than the control group. The study recommended that teachers use games in their grammar classrooms about once a week. (Khonmohammad, Gorjian & Eskandari, 2014) in a study investigated the use of games to affect learners' motivation in learning English grammar among young learners of English language in Iranian context. The design of this study was based on an experimental method. The study consisted of two groups, namely an experimental and a control group. The participants took a pre-test on grammar at the beginning of the course. Both experimental and control groups experienced 24 sessions of grammar treatment via game-based instruction controlled by the researchers and performed by the members of the group and the learners in the control group dealt with the traditional program of learning grammar through explanation. Finally, both groups sat for a post-test and data were collected and analyzed through Independent Samples t-test analysis. The results showed that the participants in the experimental group were significantly motivated to learn grammar than the control group. Implications of the study for teaching grammar are that learners' motivation in learning grammar could be enhanced through enjoyment and fun.
Concluding Remark
All the previous studies dealt with using Web-site Games, computer games and their impact on language learning. Several previous studies indicated that there is strong relationship between using web-site games and their positive effect on the EFL students' performance in one of vocabulary and grammar. However, what distinguishes this study from other studies is that it investigated the effect of using Web-site Games on EFL students of the 4th grade performance in vocabulary and grammar in Jordanian schools.
Participants of the Study
The participants of the study consisted of 48 fourth grade students from Jijin Secondary School, in Irbid, Jordan, during the second semester of the scholastic year 2019/2020. They were already divided into two groups: Group one consisted of (23) students which was chosen as an experimental group and was taught vocabulary and grammar via Website Games technology and group two consisted of (25) students, which assigned as a control group and was taught conventional method of teaching. The pre-test and post-test were of three parts; a vocabulary and grammar tests that tested students' ability to interact appropriately in communicative settings.
Design of the Study
In this quasi-experimental study, the experiment of the study was conducted for 8 weeks. The learners were already divided into two groups. Both groups were taught the same material on the same days of the week. The learners sat to a pre-test in order to assure that both groups have the same level of vocabulary and grammar. The first group, which was assigned as an experimental group and was taught the vocabulary and grammar via Website Games by the researcher, whereas, group two that was chosen as a control group taught using the conventional method by the teacher.
Instruments of the Study
To assess the effect of using Website Games on fourth grade EFL students' level of vocabulary and grammar, the researcher used a pre/post-test, which was administrated before the experiment to decide the actual level of both groups in vocabulary and grammar before starting the experiment. After eight weeks, the same test was administrated as a post-test to determine if using Website Games have an effect on students' vocabulary and grammar.
Vocabulary and grammar test was of two sections: section one was a grammar test that tested students' ability in grammar. Section two was a vocabulary test that tested their vocabulary growth.
Validity of the Test
A team of experts who are specialized in TEFL, CALL, and linguistics validated the test and the questionnaire. To achieve the face validity of the test instruments, those experts were asked to review the instrument of the study before administrating them. Remarks, comments International Journal of Linguistics ISSN 1948-5425 2021 and recommendations of these experts were taken into consideration. They made important changes on the test. They checked the test regarding the number of questions, distribution of the scores, the content, form, spelling, grammar, meaning and duration.
Reliability of the Test
To establish the reliability of the test, the researcher used the test-retest technique. The test was conducted on a pilot group consisted of (10) students who were not included in the sample of the study. Two weeks later, the pilot group sat to the same test. By using Pearson's formula, the correlation coefficient between students', scores on both testing occasions were computed to be found 90%. Thus, the test can be described as being reliable.
1
The independent variable of this study is the teaching method which includes: Teaching vocabulary and grammar aspects by Website Games and teaching vocabulary and grammar regularly. 2 The dependent variables are students' scores of both groups (experimental and control) in vocabulary and grammar.
Instructional Material
The instructional material that is used in this study was Action Pack 4 textbook. The textbook which consists of two parts (Student's book and Activity book). Each part has a number of modules contain several topics, titles, headings and sub-headings. Two modules were purposefully chosen with the help of instructors who teach Action Pack at other schools. The material that was adopted by the Ministry of Education at Jordanian schools has few features related to the English community such as authentic pictures, songs, videos, authentic reading text, materials, short stories, dialogues among native speaker and objects. The vocabularies used for teaching the experimental group were collected from British and America newspapers, magazines and, Wikipedia and many sites from the internet. (See appendix c). The grammar topics that were used in this study included the following aspects: past simple, present continuous, countable and uncountable nouns, and comparative and superlative adjectives.
Procedures of the Study
This study was conducted during the second semester of the academic year 2019-2020 at Jijin Secondary School for boys. The following procedures were followed after the researcher had the approval of the Directorate of Education in Irbid to conduct this study.
1. Jijin Secondary School was chosen to conduct this study.
2. Fourth grade students who study at Jijin Secondary School were purposefully chosen for logistic purposes as a sample of the study.
3.The sample of the study were already divided in two sections; section A was assigned as an experimental group while section B was assigned as a control group. ISSN 1948-5425 2021 4. The researcher explained the nature of the study to the students. 5. A pre-test was administrated to both groups' students to make sure that there were no significant differences between the two groups in their level of vocabulary and grammar.
International Journal of Linguistics
6.Students in both groups sat to vocabulary and grammar test at the beginning of the second semester of the academic year 2019-2020 to determine their vocabulary and grammar level before starting the experiment. 7. The material was taught four times a week for each group for a period of 8 weeks to practice vocabulary and grammar.
8.The experimental group studied the same syllabus used for teaching the control group by using website games, whereas the control group studied regularly. 9. A post-test was administrated to both experimental group and control group after the experiment.
10. Students' results were sent to a statistician to analyze the data according to the descriptive statistical methods (means, standard deviation and T-test
Equivalence of the Two Groups
Pre Vocabulary and Grammar test was administrated to both groups to identify the actual level of students before starting the experiment, means, standard deviations and t-test were used to find out any significant differences between both groups of the study, as shown in the table below. Table 1 reveals that students' scores for both groups were almost equivalent in pre-test before applying the experiment. This indicated that the two groups were equivalent, before starting the experiment. This also showed that the difference between scores of both groups on the pre Vocabulary and Grammar test was not statistically significant.
Findings of the Study
The question was "Are there any statistically significant differences between the mean scores of the experimental and control group students' vocabulary and grammar due to the strategy of teaching (Using Website Games vs. regular instruction)?" For answering this question, means and standard deviations of students' Vocabulary and Grammar the strategy of teaching (Using Website Games vs. regular instruction) were used International Journal of Linguistics ISSN 1948-5425 2021 and t-test were used to find out any significant differences between both groups of the study, as shown in table below. Table 2 shows there are statistically significant differences at (= 0.05) between the means of both groups on the post students' Vocabulary and Grammar test, in favor of the Experimental group (Using Website Games).
Discussion of the Results of the Study
The question of the study investigated if there were any significant differences between the mean scores of the experimental and control group in Vocabulary and Grammar due to the strategy of teaching (Using Website Games technology vs. regular instruction).The findings of the study have revealed that using Website Games was very effective method to develop Vocabulary and Grammar.
The experimental group students' scores in the post-test were higher than those of the control group. Accordingly, the hypothesis of the study, which reads "There are no statistically significant differences between the experimental and control groups" mean scores in Vocabulary and Grammar due to the strategy of teaching (Using Website Games technology vs. regular instructions) at α < 0.00" was rejected.
Moreover, using Website Games technology can be regarded as an effective method to develop the EFL learners' Vocabulary and Grammar. These results agree with the results of many studies that tackled many aspects and topics that related to using Website Games technology and its effect on Vocabulary and Grammar.
These results of the first question are in line with what stated by Mubaslat (2012) who conducted a study attempts to determine the role of educational games on learning a foreign language, and to compare games with more traditional practices as effective learning tools on the basic educational stage students at governmental schools in Jordan, an experimental research is conducted using three groups out of six randomly. To determine the relationship between learning a foreign language and educational games among the participants, a one way Analysis of Variance (ANOVA) is performed based on achievement levels. The results of the post test for the experimental group are so better than the controlled one which show that games have a good effect on improving the achievement for the primary stage and to create an interactive environment. It is recommended to use games since they are very effective especially for the primary stages in teaching a second language and games are helpful for the teacher as a procedure for language acquisition.
These findings are in harmony with Ashraf, Motlagh and Salami, (2014) who investigated the usefulness of online games in vocabulary learning of Iranian EFL students. In a study entitled "The Impact of Online Games on Learning English Vocabulary by Iranian (Low-intermediate) EFL Learners" The participants, (24) low-intermediate EFL learners, were randomly assigned to experimental and control groups. The experimental group learnt some new words via online computer games in 15 weeks. A vocabulary-based test, acting as pre-test and posttest, was conducted in the first and 15th weeks. The findings of their study indicated that the experimental group outperformed the control group statistically significant in the post-test. Therefore, online games proved to be more effective in learning English vocabulary for these students.
In addition, Aghlara and Tamjid (2011) report similar findings. In their research, a particular computer game was used to investigate the extent to which computer games can assist students to acquire vocabulary. In particular, they investigated 40 students who were six to seven years of age with no prior knowledge of English. The students were separated into an experimental and a control group consisting of 20 students each. After a 40-day teaching period, during which the experimental group was allowed to practice vocabulary by using computer games, it was found that the students of the experimental group outperformed those of the control group, proving that the use of computer games was more successful in teaching vocabulary to children than other vocabulary practice activities.
Conclusion
It could be concluded that using Web-site Games are very important in teaching English vocabulary and grammar because it can help students to be more confident. Besides, the results of the study showed the new experience improved their vocabulary and grammar. Using Web-site Games had a positive effect on EFL learners' vocabulary and grammar. The following conclusions should be derived from this study: Using Web-site Games is seriously advised to be used in teaching vocabulary and grammar.
Using Web-site Games has more effect on vocabulary and grammar.
Additionally, it can be concluded that the use of web-site games motivates students towards learning and they create a comfortable atmosphere. They also enhance the students' learning of vocabulary and using them in their lifelike situations within contexts weather graphically or verbally. They also facilitate the teacher's job by changing his role from a manager to a facilitator.
Recommendations
According to the finding of this study, textbook writers, researchers, teachers, and students are highly recommended to take the following recommendations into consideration to achieve their job successfully: Further similar studies on other classes can be conducted in order to make the results more valid and more applicable.
Undergraduate students can be given courses on using web-site games in teaching English in general and teaching vocabulary and grammar specifically.
It is recommended that the Ministry of Education adopt web-site games in its curricula.
It is recommended that the Ministry of Education train teachers on using web-site games. | 7,659.4 | 2021-06-17T00:00:00.000 | [
"Computer Science",
"Education",
"Linguistics"
] |
Process-Oriented Estimation of Chlorophyll-a Vertical Profile in the Mediterranean Sea Using MODIS and Oceanographic Float Products
Reconstructing chlorophyll-a (Chl-a) vertical profile is a promising approach for investigating the internal structure of marine ecosystem. Given that the process of profile classification in current process-oriented profile inversion methods are either too subjective or too complex, a novel Chl-a profile reconstruction method was proposed incorporating both a novel binary tree profile classification model and a profile inversion model in the Mediterranean Sea. The binary tree profile classification model was established based on a priori knowledge provided by clustering Chl-a profiles measured by BGC-Argo floats performed by the profile classification model (PCM), an advanced unsupervised machine learning clustering method. The profile inversion model contains the relationships between the shape-dependent parameters of the nonuniform Chl-a profile and the corresponding Chl-a surface concentration derived from satellite observations. According to quantitative evaluation, the proposed profile classification model reached an overall accuracy of 89%, and the mean absolute percent deviation (MAPD) of the proposed profile inversion model ranged from 12%–37% under different shape-dependent parameters. By generating monthly three dimensions Chl-a concentration from 2011 to 2018, the proposed process-oriented method exhibits great application potential in investigating the spatial and temporal characteristics of Chl-a profiles and even the water column total biomass throughout the Mediterranean Sea.
INTRODUCTION
The assessment and monitoring of the marine environmental status has received ever-growing attention in recent years due to the potentially critical impact of ongoing natural and human-induced changes on related ecosystem functioning and services (Puissant et al., 2021). As key players in ocean biodiversity, the alteration of phytoplankton provides information on some of the principal climate-driven effects on environmental forcing, and consequently, on marine ecosystem equilibrium, making monitoring and assessing their composition and distribution is of great importance for marine ecosystem and even global change studies (Falkowski, 2012;Sammartino et al., 2018;Kotta and Kitsiou, 2019).
A deeper understanding of the dynamics and evolution of marine phytoplankton requires three dimensions (3D) observations of the algal abundance at different temporal and spatial scales and much wider and regular coverage than currently achievable (Sammartino et al., 2020). However, the traditional measurement is mainly based on in situ sampling either through coastal monitoring programs or time-limited oceanographic cruises, or fixed platforms such as moored buoys, which can accurately describe local conditions along the water column, but are clearly inadequate to describe processes occurring within the wide range of temporal and spatial scales impacted by undergoing changes (von Schuckmann et al., 2018). Advanced instruments such as Biogeochemical Argo (BGC-Argo) float allow for autonomously observing while drifting with ocean currents according to pre-programmed procedures, which largely alleviate the shortcomings of low sampling density and costly of the traditional in situ measurements, but still far from continuous in space.
Based on the shift of seawater spectrum from blue to green wavelengths caused by the seawater substances controlling the color of seawater, ocean color remote sensing (OCRS) revolutionized marine phytoplankton assessment by estimating the concentration of Chlorophyll-a (Chl-a), one of the most commonly used bioindicator of phytoplankton abundance (Gordon et al., 1980). With the advantages of low-cost and fast observation at large scales in a space continuous manner, Chl-a product of OCRS have become one of the most important data in marine researches (McClain, 2009). However, the signals measured by OCRS sensors are result from the interaction between light and water constituents, decreasing exponentially with water depth (Morel, 1988). As a consequence, OCRS product can only reveal signals integrated within the surface detectable layer and not properties at greater depth (Siswanto et al., 2005;Uitz et al., 2006).
Given their respective strengths, there have been many attempts to combine these two kinds of observations to improve our knowledge of the interior structure of the ocean. Referring to Liu et al. (2021), these methods can be categorized into result-oriented and process-oriented according to the strategy based on. Result-oriented approach is to infer Chl-a concentration at different depths directly from other measurable ocean variables. Due to the complexity of the marine ecosystem, this kind of method usually requires tools as artificial neural network (Sammartino et al., 2020) and its variants (Puissant et al., 2021), which are capable of revealing non-linear relations. Although the performance of such methods has been validated on regional and even global scale, the huge demand for input variables and computational resources greatly limits their utilization potentiality (Erickson et al., 2019;Lee et al., 2015).
Process-oriented approach is to extrapolate vertical profile by inferring parameters that control the shape of the profile. This kind of methods generally include a profile parameterization process, which characterizes the shape of profile as a certain number of parameters by fitting profile to a certain mathematic equation . These parameters controlling the shape of profile are usually referred to as shape-dependent parameters. After profile parameterization, the relationships between surface variables and these shape-dependent parameters then can be established explicitly by empirical models (Dierssen, 2010). However, extensive practice has revealed that it is difficult for a single model to applicable to all types of vertical profiles. It is a common practice to divide profiles into several subcategories according to certain criteria (such as surface concentration or water column total concentration), and develop subcategory-specific empirical models (Morel and Berthon, 1989;Millań-Nuñez et al., 1997). Although this strategy can effectively improve the overall accuracy, profile classification is too subjective to find the theoretical basis. The advanced machine learning methods can also be used to relate the surface variable with shape-dependent parameters, but such method are not only too unintuitive, but also too complex (Silulwane et al., 2001;Charantonis et al., 2015;Sauzède et al., 2015). A simple and objective profile classification method for profile reconstruction is urgently needed.
This manuscript attempts to propose a novel process-oriented Chl-a profile inversion method by utilizing an unsupervised machine learning profile clustering method called profile classification model (PCM) in the pre-classification stage. The remainder of this paper is organized as follows: the study area and data are introduced in Section 2; the methods are described in Section 3; the accuracy of the proposed method is validated in Section 4; a discussion is presented in Section 5; and conclusions and perspectives are provided in Section 6.
Study Area
This study was conducted in the Mediterranean Sea, a semienclosed basin located in the transition zone between temperate and subtropical environments (from 30°N to 45°N and 0°E to 30°E ). The Mediterranean Sea is surrounded by continental Europe, Asia, and Africa and is connected to the Atlantic Ocean through the Strait of Gibraltar. The Mediterranean Sea has remained an area of heightened interest for global climate change research over the past few decades, partly because it plays a major role in responding to global warming (Pisano et al., 2020), and partly because it is considered an ideal natural laboratory where processes can be characterized on smaller scales than can be achieved in other oceans (Robinson et al., 2001).
In this study, the Mediterranean Sea was empirically divided into five subsea areas, namely, the northwest Mediterranean Sea (NW), southwest Mediterranean Sea (SW), Tyrrhenian Sea (TYR), Ionian Sea (ION) and Levantine Sea (LEV), following Barbieux et al. (2019). The spatial extents of the Mediterranean Sea and the subdivided regions are shown in Figure 1.
BGC-Argo Chl-a Profiles
More than 70 BGC-Argo floats deployed between 2012 and 2016 alone (Cossarini et al., 2019). The long-term deployment has made the Mediterranean BGC-Argo network one of the densest networks in the global ocean in terms of the number of profiles per unit surface area, providing unique data support for studying ecosystem characteristics and dynamics.
In this study, Mediterranean Chl-a vertical profiles were obtained from a global database of vertical profiles derived from Biogeochemical Argo float measurements publicly available at https://www.seanoe.org/data/00383/49388/ (Marie et al., 2017). This dataset includes 0-1000 m vertical profiles of bio-optical and biogeochemical variables acquired by autonomous profiling BGC-Argo floats around local noon between October 2012 and January 2016. It contains profiles of downward irradiance at 3 wavelengths (380, 412 and 490 nm), photosynthetically available radiation, Chl-a concentration, fluorescent dissolved organic matter, and particle light backscattering at 700 nm. All variables have been quality controlled following specifically-developed procedures, and data corruption by biofouling and any instrumental drift has also been verified. In the Mediterranean Sea, all Chl-a profiles were collected by 27 BGC-Argo profiling floats, and their trajectories o are shown in Figure 1, with the various floats highlighted in different colors.
As a result, a total of 1611 Chl-a vertical profiles can be matched with other data used. To match up in situ Chl-a vertical profiles with other data, a 1°×1°box centered on each float location was adopted. These profiles were collected within a wide geographical area of the Mediterranean Sea in almost equal numbers during each season, with a total of 409, 411, 415 and 376 records measured in spring, summer, autumn and winter, respectively. Therefore, these profiles could represent the spatial and temporal characteristics of the phytoplankton vertical distribution in this basin.
GLBa0.08 Analysis Data
The 5-year product (from 2012 to 2016) of the Hybrid Coordinate Ocean Model-Navy Coupled Ocean Data Assimilation (HYCOM +NCODA) global 1/12°analysis (GLBa0.08; https://www.hycom. org/dataserver/gofs-3pt0/analysis) product was used in this study. The HYCOM Consortium is a nearly real-time global ocean prediction system based on the Hybrid Coordinate Ocean Model (HYCOM) and Navy Coupled Ocean Data Assimilation (NCODA) system (Halliwell, 2004). The advantage of HYCOM-based analysis is the implementation of a substantially evolved hybrid vertical coordinate system, which remains isopycnic in the well-stratified open ocean and combines different types of coordinates, transiting into level coordinates in less stratified regions (surface mixed layer) and very shallow water and into terrain-following sigma coordinates in nearshore regions (Chassignet et al., 2007). This feature provides the HYCOM with the ability to optimally simulate coastal and open ocean circulations.
GLBa0.08 is a uniformly gridded (1/12°) global reanalysis dataset that converts native HYCOM [ab] data into NetCDF data on a native Mercator-curvilinear HYCOM horizontal grid, interpolated into 33 z-levels (Shu et al., 2014). This product provides 11 ocean essential variables, such as the surface water flux, salinity, surface salinity trend, surface temperature trend, and mixed layer depth (MLD) (Augusto Souza Tanajura et al., 2014). All these data are freely available at https://www.hycom. org/dataserver/gofs-3pt0/analysis. In this research, only MLD products pertaining to the study period were utilized.
Satellite Data
Daily a nd monthly Moderate Resolution Imaging Spectroradiometer (MODIS) level-2 Chl-a product from 2011 to 2018 was used in this study. This product is generated with an empirical relationship derived from in situ measurements of Chla and remote sensing reflectance in the blue-to -green region of the visible spectrum (Hu et al., 2012). This product is produced by the NASA Ocean Color program and is available from https:// oceancolor.gsfc.nasa.gov/cgi.
In addition to the Chl-a surface concentration, the euphotic layer depth (z eu ) has been verified as a main variable explaining the vertical variability in Chl-a along the water column (Vadakke-Chanat and Shanmugam, 2020). The euphotic layer depth is a common indicator of water turbidity, defined as the depth at which irradiance is attenuated to 1% of the initial value at the surface (Tett, 1989;Dera, 1992). Consequently, the corresponding level-3 product of the MODIS diffuse attenuation coefficient for the downwelling irradiance at 490 nm (hereafter referred to as Kd490), which can be used to estimate the depth of the euphotic layer, was also employed in this study. The MODIS Kd490 product is produced under the GlobColour project and is freely available at http://globcolour. info/. Kd490 was generated following the model proposed by Lee et al. (2005) . In this study, the method proposed by Lin et al. (2016) was used to estimate the euphotic layer depth. The specific steps are as follows: (1) The diffuse attenuation coefficient for the downwelling irradiance of the usable solar radiation (USR) (KdUSR) was estimated from Kd490. USR represents the spectrally integrated solar irradiance within the spectral window of 400-560 nm, as defined by Lee et al. (2014) .
(2) The depth of the euphotic layer was derived according to the equation proposed by Lin et al. (2016).
All these satellite data have undergone quality and flag assessments and are generated with a spatial resolution of 4 km. To match the BGC-Argo, satellite, and analysis data, the satellite and analysis data in a 1°×1°box centered on each float location were used to provide matchups.
Methodological Overview
First, all the BGC-Argo Chl-a profiles were clustered with the PCM, after which, a decision tree profile classification model was established based on a priori knowledge provided by the clustering results. Then, these profiles were parameterized with a modified Gaussian function. Finally, for each type of Chl-a profile, empirical relationships between the corresponding shape-dependent parameters derived from previous profile fitting and Chl-a surface concentration were established. A flowchart of these steps is drawn in Figure 2.
Remote Sensing Pixel-Scale Chl-a Profile Type Identification
The vertical distribution of phytoplankton exhibits different shapes under the influence of the interactions between various biological (e.g., particular species present and physiological state) and physical (e.g., currents and shear between water parcels) factors, making it difficult to obtain one set of general parameters that can be applied to extrapolate all profiles. Therefore, a common practice is to classify vertical profiles according to their characteristics and to develop an extrapolation model for each of the subcategory. To more simply and objectively identify the profile type of each pixel of the remote sensing Chl-a product, the shape characteristics of Chl-a vertical profiles in the Mediterranean Sea were first explored by clustering all BGC-Argo Chl-a profiles. According to the profile clustering result, a binary tree profile classification model based on several publicly available marine essential variables was then established.
Revealing the Shape Characteristics of Chl-a Profiles
Next, the Chl-a vertical distribution in the Mediterranean Sea was explored with the PCM.
The PCM is an unsupervised machine learning classification technique designed to reveal the vertical distribution of ocean temperatures (Maze et al., 2017a). This method relies on a Gaussian mixture model (GMM) to decompose the probability density function (PDF) of a collection of profiles into a weighted sum of multidimensional Gaussian PDFs, thus facilitating the identification of the representative patterns of a given dataset (Maze et al., 2017a;Maze et al., 2017b). After classification, profiles within a given category are more similar to each other than they are to profiles in other categories (Jain et al., 1999).
The PCM was originally developed to characterize coherent heat patterns in the North Atlantic Ocean. Since it is more impartial than subjective grouping of profiles into classes, the PCM has since been successfully applied to characterize the vertical distributions of a variety of oceanographic variables (Boehme and Rosso, 2021). However, the performance of the PCM in the classification of Chl-a vertical profiles remains unclear and needs to be further explored.
Through objective and trial-and-error approaches, the Chl-a profiles in the Mediterranean Sea were ultimately categorized into four types with the PCM, and the shape characteristics of these profile types are shown in Figure 3. Referring to the previous literature (Lavigne et al., 2015), these four types were denoted as mixed, exponential, quasi-Gaussian and Gaussian types according to their shape characteristics. As shown in Figure 3, due to the presence of the MLD, the Chl-a concentration in all profile types remained almost constant at either a deep or a shallow depth. Specifically, the MLD was much deeper for the mixed type than for the other three profile types, resulting in almost constant mixed type profiles up to a certain depth. The Gaussian type profiles exhibited typical Gaussian distribution shapes, namely, curves with a concentration peak, and the Chl-a concentration decreased with increasing distance from the peak depth. Furthermore, the vertical profiles of the exponential and quasi-Gaussian types were roughly similar in shape, but these profiles differed greatly with regard to the range of specific concentrations, the presence of concentration peaks, and the depth at which the concentration declines.
Decision Tree Profile Classification Model
By analyzing the shape characteristics of these four types of Chl-a profiles and referring to the previous literature, the Chl-a surface concentration, euphotic layer depth and MLD were adopted in this study to establish a decision tree classification model to classify the type of each pixel of the remote sensing Chl-a product.
In general, Chl-a vertical distributions could be divided into two main types: uniform and nonuniform, corresponding to mixed and stratified water, respectively (Mignot et al., 2011). The differences between these two types of Chl-a vertical distributions in the Mediterranean Sea lies in whether the MLD exceeds the euphotic layer depth. For the uniform type profile, phytoplankton are nearly evenly distributed along the water column due to strong water mixing, resulting in a homogeneous Chl-a concentration from the surface to great depths. Hence, the ratio between the MLD and euphotic layer depth is a common indicator for the identification of deep mixed water (Uitz et al., 2006). Since the euphotic layer is the maximum depth of the light zone suitable for phytoplankton photosynthesis (Khanna et al., 2009) , when the euphotic layer is shallower than the mixed l a y e r , t h e p h y t o p l a n k t o n a r e a l m o s t u n i f o r m l y distributed vertically.
As shown in Figure 3, the Gaussian, quasi-Gaussian and exponential types are all nonuniform due to the obvious presence of a subsurface chlorophyll maximum (SCM). By analyzing the shape characteristics of these three nonuniform profiles, the surface concentrations on the exponential type profiles were found to be much higher than those on the other two types of profiles ( Figure 4). In addition, Gaussian profiles could be distinguished from quasi-Gaussian profiles based on the MLD.
As a result, a binary tree based on the surface concentration, MLD and euphotic layer depth was developed to infer the vertical profile type in each pixel of the OCRS Chl-a product. As shown in Figure 5, the following three steps were included: ① A ratio of the MLD to the euphotic layer depth higher than 1 was adopted to identify profiles of the mixed type. ② A Chl-a surface concentration higher than 0.799 was used to recognize profiles of the exponential type. ③ Whether the MLD exceeded 31.1 m was considered to distinguish between Gaussian and quasi-Gaussian profiles.
If the MLD exceeded this threshold, quasi-Gaussian profiles were determined; otherwise, Gaussian profiles were identified.
In Situ Chl-a Profile Parameterization
To parameterize the profiles, each profile was fitted with a certain mathematical model, allowing the profile shape to be represented by model coefficients, which is a common practice in characterizing the profile shape of essential marine and atmospheric variables (Beckmann and Hense, 2007;Gonzaĺez-Pola et al., 2007). Various functions have been used to parameterize marine variable vertical profiles, including Gaussian functions (Morel, 1988;Morel and Berthon, 1989) and their derivatives (Uitz et al., 2006), power functions (Liu et al., 2021) and exponential functions (Ardyna et al., 2013). Among these functions, the generalized (or shifted) Gaussian function is the most popular, and has been verified as a suitable function for fitting profiles containing only a single peak; this function is quite common in coastal, upwelling, open oceans and Arctic waters (Lewis et al., 1983;Siswanto et al., 2005). Here, to better characterize profiles with a shallow SCM layer, where the surface concentration is higher than the deepest value, the generalized Gaussian function was updated as a modified Gaussian function by replacing the constant with a linear function (Uitz et al., 2006). The general shapes of generalized and modified Gaussian functions are shown in Figure 6.
In this study, the modified Gaussian function was selected to parameterize the BGC-Argo Chl-a profiles. Since a mixed type Chl-a profile exhibits a constant concentration from the surface to great depths, these types of profiles were excluded from the subsequent parameterization to construct the inversion model. As a result, 1360 of the 1611 BGC-Argo Chl-a profiles were parameterized with the following: where chl surf denotes the Chl-a surface concentration measured by the BGC-Argo floats, trend indicates the trend of the background concentration, chl max is the maximum concentration in the Chl-a profile, z max denotes the maximum concentration depth, and width denotes the half-peak width, a fitting-related parameter indicating the thickness of the SCM that is defined as the width (measured in meters) at the halfheight of the SCM layer. This fitting-related parameter is defined as controlling the shape of the profile but can be obtained only by model fitting and cannot be measured directly. According to the fitting formula used herein, the fitting-related parameters specifically refer to width and trend. Here, chl max is regarded as the total magnitude of the Chl-a concentration, not that of the background-subtracted value.
To ensure the fitting accuracy, the Curve Fitting C program (O'Haver, 1997), a nonlinear iterative curve fitting method developed at the University of Maryland, was used to implement profile fitting. This method adjusts parameters in a systematic manner via a trial-and-error strategy until the equation yields a fitted curve that is close to the expected curve. Two commonly used fitting accuracy proxies, namely, the coefficient of determination (R 2 ) and root-mean-square error (RMSE), were selected to evaluate the fitting effect. Overall, over 70% of the profiles had an R 2 greater than 0.8, and the mean R 2 was 0.93. In regard to the specific types, the Gaussian and exponential types attained the best fitting accuracy, both with an average R 2 of 0.94, followed by the quasi- Gaussian profile type with an average R 2 of 0.93. To illustrate the fitting effect more intuitively, panels (a), (b) and (c) of Figure 7 show example profiles with the highest fitting accuracy for each of these three types and their corresponding estimations. As shown in this figure, strong correlations were obtained between the measured and estimated vertical profiles, with R 2 values of 0.9873, 09942 and 0.9914 and RMSE values of 3.176, 2.80 and 3.825, respectively.
The statistical results (Table 1) for the shape-dependent parameters obtained after profile parameterization quantitatively confirmed the shape characteristics of each profile type mentioned in the previous section. Specifically, in terms of the average value, the Gaussian type yielded the lowest maximum Chl-a concentration but the greatest maximum Chl-a depth, with values of 0.4 mg m -3 and 84.25 m, respectively. The exponential type was the opposite to the Gaussian type, namely, the highest maximum concentration and shallowest maximum concentration depth were obtained, at 2.03 mg m -3 and 11.47 m, respectively. Compared to those of these two types of profiles, the maximum Chl-a concentration and corresponding depth of the quasi-Gaussian type were moderate, at 0.49 mg m -3 and 38.21 m, respectively. In terms of the half-peak width, that of the quasi-Gaussian type was the largest (81.51 m), followed by the exponential (73.08 m) and Gaussian types (55.24 m). According to the half-peak width, phytoplankton were relatively more vertically concentrated along the Gaussian profile direction and more dispersed along the exponential and quasi-Gaussian profile directions, with half-peak widths of 55.24, 73.08 and 81.51 m, respectively. The trend variable is a model fittingrelated parameter, with no explicit biophysical meaning, although this parameter can, to a certain extent, reflect the difference between the Chl-a surface concentration and the Chl-a concentration at great depths.
Chl-a Vertical Profile Inversion Model
Reconstructing the Chl-a vertical profile from OCRS product is to establish the correlations between its surface variables and its shapedependent parameters. Matchups between the BGC-Argo profiles and OCRS products provide the possibility to implement this inversion. After classifying and parameterizing the profile, the shape-dependent parameters of each profile were obtained. Then, the relationships between these shape-dependent parameters derived from the parameterization between their corresponding surface concentrations derived from the OCRS Chl-a products were analyzed. To ensure the robustness and accuracy of the established relationships, 487 poorly fitted profiles with R 2 values lower than 0.8 were discarded. As a result, 1124 robustly parameterized vertical profiles were retained to establish the vertical profile inversion model.
To establish the inversion model, all matchups were first used to reveal the global correlations applicable to all profile types. Only when robust global relationships cannot be obtained, the type-specific correlations were considered by using matchups corresponding to different type profiles. The SCM depth (z max ) has been demonstrated to be inversely proportional to the Chl-a concentration derived from satellite observations (chl sat ) over a wide range of trophic conditions (Morel & Berthon, 1989;Uitz et al., 2006;Mignot et al., 2011). This conclusion remains valid in the Mediterranean Sea. As shown in panel (a) of Figure 8, z max had a strong negative correlation with chl sat : Eq: 2 This negative relationship was observed from all matchups with an R 2 value of 0.79, thus indicating this global correlation suitable to estimate z max for all types of vertical profiles with chl sat derived from OCRS products.
Panel (b) of Figure 8 shows the correlation between the maximum Chl-a concentration (chl max ) and half-peak width of the fitting curve (width) for all profile types. Similar to z max and chl sat , a strong negative correlation between chl max and width was observed: chl max =37:55Âwidth −1:174 Eq: 3 This formulation yielded an R 2 of 0.81. The relationship between the phytoplankton biomass maximum and SCM layer thickness is well understood, and thus, width was concluded to be closely correlated with chl max in the study area. Notably, as the Chl-a maximum concentration rises, more phytoplankton accumulate within the SCM layer, nutrient consumption occurs more quickly, and the SCM layer becomes thinner (Barbieux et al., 2019).
In addition to these two relationships, another global correlation was observed between chl sat and the background trend of the Chl-a concentration (trend): trend=−0:0006Âchl sat +0:00001 Eq: 4 The linear relationship between trend and chl sat yielded an R 2 of 0.92, as shown in panel (e) of Figure 8. The potential reason for this negative relationship between these two parameters could be the positive correlation between the attenuation intensity of solar radiation and the phytoplankton biomass at the sea surface.
Having established the relationship between z max and chl sat and that between chl max and width, the Chl-a vertical profile can be reconstructed only if the relationships between z max or chl sat with both chl max and width can be established. However, except for the above parameters, no other reliable global correlations were discovered. Therefore, the possible correlations between these parameters and the combinations of the vertical profile types were explored via exploratory data analysis.
In regard to the quasi-Gaussian Chl-a vertical profiles, a strong positive linear correlation between chl max and chl sat was observed: chl max =chl sat Â1:03+0:09 Eq: 5 This linear relationship yielded an R 2 of 0.96. With this relationship, chl max could be derived from the sea surface concentration measured by satellite sensors. Then, z max can be derived from chl sat via Eq. 2, and width can be estimated from the derived chl max value based on Eq. 3. In this way, the quasi-Gaussian Chl-a vertical profile can be reconstructed.
In contrast to the quasi-Gaussian type, no direct relationships were found between chl sat and chl max or width in the Gaussian type profiles. Considering that the thickness of the subsurface layer is top-bottom limited by both light and nutrients, z eu and the MLD were introduced here to estimate width. The variation in width with respect to z eu and the MLD can be formulated as follows: This correlation yielded an R 2 of 0.89. The establishment of this equation facilitates the estimation of all shape-dependent parameters of the Gaussian type Chl-a vertical profile. First, trend and z max can be estimated based on the OCRS Chl-a products with Eqs. 2 and 4, respectively. Then, width can be calculated according to Eq. 6 based on the products of z eu and MLD. Finally, chl max can be derived from the estimated width according to Eq. 3.
Unfortunately, failure to determine the correlations between chl sat and width and chl max makes it impossible to reconstruct the exponential type Chl-a vertical profiles. Fortunately, few profiles were of this type (accounting for only 1.21% of all profiles) in the Mediterranean Sea, and thus, ignoring exponential type profiles would not significantly impact the inversion of Chl-a vertical profiles across the study region.
Accuracy Assessment Metrics
To evaluate the classification models, the most popular confusion matrix was selected in this study. Based on the selected confusion matrix, the overall accuracy (OA), user accuracy (UA) and producer accuracy (PA) were calculated. When deriving the confusion matrix, the labels provided by the PCM were regarded as the truth, and the labels obtained with the established vertical profile classification model were regarded as predictions.
To evaluate the profile inversion accuracy, the RMSE and the mean absolute percent deviation (MAPD) were selected as the accuracy metrics. Their mathematical equations are as follows: where p m denotes the shape-dependent parameter measured by the Bio-Argo profiling floats, p e denotes the corresponding model estimation, and n indicates the number of vertical profiles used for evaluation.
Spatial and Temporal Patterns of the in Situ Chl-a Vertical Profiles
To reveal the spatial and temporal patterns of the Chl-a vertical profiles in the Mediterranean Sea, the PCM classification results for all the profiles measured by the BGC-Argo floats were counted according to both the subsea area and the season, as shown in type in the Mediterranean Sea, accounting for 78.48% of all measured profiles, followed by the mixed and quasi-Gaussian profile types, accounting for 15.6% and 4.72%, respectively. As mentioned above, the exponential type attained the lowest proportion, accounting for only 1.21% of all measured profiles. In terms of the spatial distribution, the Gaussian type was widely distributed in all five subsea areas, and the quasi-Gaussian type occurred in four subsea areas (all but LEV). The distribution of the mixed type was further reduced to the ION, LEV and NW subsea areas, whereas the exponential type was found only in the NW subsea area. In terms of the temporal distribution, the Gaussian type emerged mainly in summer, autumn and winter, while the exponential type occurred mostly in early summer. The mixed and quasi-Gaussian types were primarily observed in spring and summer but also in winter in some subsea areas.
The spatial and temporal characteristics of the Chl-a vertical profiles revealed by the PCM agree with previous findings in Mediterranean bioregions (Mayot et al., 2017;Palmieŕi et al., 2018). This agreement can be considered evidence that the classification results of the PCM can provide sufficiently accurate a priori knowledge for the construction of a vertical profile classification model.
Accuracy Assessment of the Proposed Profile Classification Model
The accuracy of the OCRS-based Chl-a vertical profile classification model was evaluated based on profiles measured by the four BGC-Argo floats reserved for validation with World Meteorological Organization (WMO) identifier numbers 6901511, 6901513, 6901776, and 6902828. As a result, 151 matchups were used to evaluate the performance of the trained binary tree profile classification model. Based on these matchups, the confusion matrix was calculated, as summarized in Table 2.
The confusion matrix reveals that the classification model achieved a satisfactory overall performance with an OA value of 89%. Specifically, the PA values of the Gaussian, quasi-Gaussian, exponential and mixed Chl-a vertical profile types were 92%, 100%, 58%, and 79%, respectively, and the corresponding UA values were 96%, 63%, 88%, and 65%. Regarding the dominant Gaussian Chl-a vertical profiles, the classification model achieved both very low omission errors and very low commission errors with PA and UA values of 92% and 96%, respectively. Although the PA values of the exponential type and UA values of the quasi-Gaussian and mixed types reached only 58%, 63% and 65%, respectively, this accuracy level is considered adequate considering their low proportion in the Mediterranean Sea. The OA value of the classification model reached 89%, which is sufficiently accurate to meet the needs of inverting Chl-a vertical profiles.
Accuracy Assessment of the Established Profile Inversion Model
The matchups used in the evaluation of the proposed classification model were also used to quantitatively assess the accuracy of the established vertical profile inversion model. The performance of the profile inversion model was assessed by measuring the deviations between the measured shape-dependent parameters and corresponding model estimations. Four parameters were used to characterize the Chl-a vertical profile shape in this study, but since width and trend cannot be directly measured, only chl max and z max were thus used for evaluation purposes to avoid possible uncertainties due to the profile parameterization.
Comparisons of chl max and z max estimated with the proposed profile inversion model with the corresponding reference in situ measurements are shown in panels (a) and (b), respectively, of Figure 10. Overall, a robust relationship was observed between the z max values estimated with the inversion model based on the OCRS products and those measured by the BGC-Argo floats. This correlation was associated with a high determination coefficient (R 2 = 0.89) and low RMSE (8.49) deviation from the observed values. A satisfactory correlation was also observed between the estimated and measured chl max values. Although the R 2 value and slope of the regression were lower at 0.64 and 0.72, respectively, and although the MAPD value increased to 22%, the correlation remained highly satisfactory given the large range of the study area. The accuracy of the proposed Chl-a profile inversion model was further evaluated for each of the different types of profiles ( Table 3). In general, the inversion model achieved a higher accuracy for the Gaussian type than for the quasi-Gaussian type (R 2 : 0.62 versus 0.51; slope: 0.72 versus 0.33; RMSE: 0.13 versus 0.2; MAPD: 22% versus 37%).
To compensate for the lack of evaluation results for width and trend, the Chl-a concentrations at the different depths estimated with the vertical profile inversion model were further compared to those measured by the BGC-Argo floats ( Figure 11). Comparisons of the Chl-a concentrations at different depths can reveal not only the accuracy of the estimated width and trend values but also the overall accuracy of the profile inversion model. According to Figure 11, the predicted vertical profiles exhibited relatively high overall accuracy in all five subsea areas. The R 2 values were all higher than 0.5, and the RMSE values were all lower than 0.1. Among the specific subsea areas, the agreement between the actual measurements and predictions remained near the 1:1 line in the TYR, ION and LEV subsea areas. In contrast, in the SW and NW subsea areas, the regression lines were below the 1:1 line, indicating that the profile inversion model yielded slightly underestimated values, with R 2 values of 0.71 and 0.63, respectively.
Spatial and Temporal Characteristics of the Chl-a Profile Types in the Mediterranean Sea
To investigate the spatial characteristics of the Chl-a vertical profile types in the Mediterranean Sea, the monthly Chl-a profile types from 2011 to 2018 were generated based on the proposed Chl-a profile classification model, and the proportion of each type was determined. The results are shown in Figure 12, with the proportions of the mixed, exponential, quasi-Gaussian and Gaussian types being shown successively in panels (a) through (d). In general, the findings are consistent with the classification results of the PCM applied to the BGC-Argo Chl-a profiles, as introduced in Section 4.1. Specifically, the Gaussian type is the most common Chl-a profile type in the Mediterranean Sea, followed by the quasi-Gaussian, mixed, and exponential types. Moreover, the Gaussian type Chl-a profile is evenly distributed across the whole Mediterranean Sea, with a proportion higher than 50% in most of the areas, even exceeding 80% in the northern coastal area and near the Kerkennah Islands, a group of islands off the eastern coast of Tunisia. The quasi-Gaussian type attains a higher proportion in the southern half of the sea than in the northern half, with a maximum frequency of nearly 40%. The mixed type Chl-a profile is similar to the quasi-Gaussian type profile in terms of its maximum proportion, but its spatial distribution is the opposite, namely, the former profile type is concentrated mainly in the northern half of the sea.
The temporal characteristics of the Chl-a vertical profile types in the Mediterranean Sea were analyzed by adopting the monthly Chl-a profile classification results in 2013 (depicted in Figure 13) as an example. This figure shows that the Mediterranean Chl-a vertical profile types exhibit obvious seasonal characteristics. December and March. The time window of the quasi-Gaussian type profiles is similar to that of the mixed type profiles, but differently appears to predominate in January and February.
Spatial and Temporal Characteristics of the Chl-a Profile Shape-Dependent Parameters in the Mediterranean Sea
Based on the established Chl-a vertical profile classification and inversion models, the true Chl-a vertical profiles in the Mediterranean Sea can be visualized in 3D as long as the corresponding MODIS Chl-a and Kd490 products and HYCOM MLD product can be obtained. Continuous summer and autumn Chl-a concentrations in the Mediterranean were generated, for example, by using the June and September monthly MODIS Chl-a products and the monthly average MLD products. The Chl-a total concentration was further integrated according to the reconstructed vertical profiles.
To further verify the robustness of the proposed method, the characteristics of phytoplankton vertical distribution in the Mediterranean Sea revealed by the reconstruction results were compared to those reported in the literature. The per-pixel shapedependent parameters and derived total Chl-a are shown in Figure 14, showing that the different shape-dependent parameters exhibit distinct spatial and temporal patterns. For example, in terms of their seasonality, z max is deeper in summer than in autumn (mean depth of 88.54 ± 28.11 m versus 85.17 ± 24.94 m), while the width is larger in summer than in autumn (mean value of 64.36 ± 20.96 m versus 57.59 ± 18.67 m). However, chl max reveals the opposite pattern, with a higher mean value in September (0.38 ± 0.18 mg m -3 ) than in June (0.37 ± 0.15 mg m -3 ), which agrees with previous findings (Lavigne et al., 2015). Spatially, z max exhibits a longitudinal gradient in the Mediterranean Sea, with the value rising with increasing longitude up to a depth in excess of 120 m in the eastern Mediterranean Sea, except for some coastal areas. This conclusion agrees with previous findings reported by Lavigne et al. (2015) and Crise et al. (1999). A similar longitudinal gradient is also observed for width, which is larger in the eastern than in the western Mediterranean Sea. Furthermore, the existence of an east-west gradients of these shape-dependent parameters directly leads to a similar gradient of the Chl-a total concentration. As a result, the Chl-a total concentration is higher in the eastern Mediterranean Sea than in the western Mediterranean Sea in both summer and autumn.
CONCLUSIONS AND PERSPECTIVES
To observe the 3D spatial and temporal Chl-a concentration distribution in the Mediterranean Sea, a simple and robust process-oriented profile reconstruction method was proposed in this study. To estimate Chl-a vertical profiles based on corresponding OCRS products, the established vertical profile classification model was first used to identify the profile type in each of the pixels of the Chl-a product based on both Kd490 and MLD products, and the shape-dependent parameters of the vertical profiles were then estimated by using type-related correlations based on the proposed profile inversion model. A quantitative evaluation revealed that the proposed vertical profile classification model achieved an overall accuracy of 89%, and the proposed vertical profile inversion model attained an average absolute percent deviation value ranging from 12% to 37% for the different shape-dependent parameters.
The proposed profile estimation method was then used to generate monthly 3D Chl-a profiles in the Mediterranean Sea from 2011 to 2018. Based on the reconstructed Chl-a profiles, their spatial and temporal characteristics and those of the water column total biomass in the Mediterranean Sea were investigated. Considering the important role of that Mediterranean Sea plays in climate change research, the proposed method is expected to serve as a powerful tool in studying the status and changes in the Earth's environment.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
FUNDING
This study is supported by the High Resolution Earth Observation Systems of National Science and Technology Major Projects (05-Y30B01-9001-19/20-2); Key Special Project for Introduced Talents Team of Southern Marine Science and E n g i n e e r i n g Gu a n g d o n g L a b o r a to r y ( G u a n g z h o u ) (GML2019ZD0602); the National Science Foundation of China (61991454); the National Key Research and Development Program of China (2016YFC1400901). | 9,529.8 | 2022-06-24T00:00:00.000 | [
"Environmental Science",
"Mathematics"
] |
Comparative Study of Powder Carriers Physical and Structural Properties
High specific surface area (SSA), porous structure, and suitable technological characteristics (flow, compressibility) predetermine powder carriers to be used in pharmaceutical technology, especially in the formulation of liquisolid systems (LSS) and solid self-emulsifying delivery systems (s-SEDDS). Besides widely used microcrystalline cellulose, other promising materials include magnesium aluminometasilicates, mesoporous silicates, and silica aerogels. Clay minerals with laminar or fibrous internal structures also provide suitable properties for liquid drug incorporation. This work aimed at a comparison of 14 carriers’ main properties. Cellulose derivatives, silica, silicates, and clay minerals were evaluated for flow properties, shear cell experiments, SSA, hygroscopicity, pH, particle size, and SEM. The most promising materials were magnesium aluminometasilicates, specifically Neusilin® US2, due to its proper flow, large SSA, etc. Innovative materials such as FujiSil® or Syloid® XDP 3050 were for their properties evaluated as suitable. The obtained data can help choose a suitable carrier for formulations where the liquid phase is incorporated into the solid dosage form. All measurements were conducted by the same methodology and under the same conditions, allowing a seamless comparison of property evaluation between carriers, for which available company or scientific sources do not qualify due to different measurements, conditions, instrumentation, etc.
Introduction
Research in pharmaceutical technology has focused on developing and using powder carriers as structural materials for innovative drug formulations [1]. Carriers usually show a homogenous structure, high specific surface area (SSA), suitable pore size for drug incorporation, and advantageous technological properties (flow, compression, etc.) [2][3][4]. Several promising materials have been investigated in recent years, but most have been dismissed due to their non-biocompatibility or limited final processability [5]. Materials used as carriers for preparing dosage forms include microcrystalline cellulose (MCC); magnesium aluminometasilicates; clay minerals; colloidal silicon dioxide; and some others [5]. The porous structure of these materials allows the adsorption of the drug and its subsequent release in a predictable manner. Due to the ability to adsorb a drug, these materials have been used in the preparation of liquisolid systems (LSS) or solid self-emulsifying delivery systems (s-SEDDS) [2]. After incorporating the liquid component into the material's structure, a solid system with suitable properties (flow, compressibility, etc.) for subsequent processing into a solid dosage form is formed. The penetration of the liquid into the pores is influenced by several properties of both the liquid (bulk and molecular) and the carrier (geometric and surface). Upon contact of the dosage form with the solvent (dissolution medium), the active substance contained in the pores or on the surface is washed out or dissolved. Subsequently, the active substance diffuses through the pores filled with the dissolution medium [5,6].
LSS represent modern formulations in which the drug in a liquid form is incorporated into the porous structure of the carrier. The resulting free-flowing and compressible solid system shows suitable properties for the next technological processing into the final dosage form (capsules, tablets, pellets, etc.) [2,5]. The penetration of the liquid and its subsequent adsorption onto the structure of the carrier material depends on the volume and physicochemical properties of the liquid and the structural and surface properties of the carrier. When the LSS comes into contact with the dissolution medium, the already dissolved active pharmaceutical ingredient (API) is washed out from the carrier surface and released. Subsequently, the drug diffuses through the internal pores of the carrier filled with a dissolution medium [2,7,8].
The liquisolid technique for enhancing drug bioavailability has been used in many studies [9][10][11][12]. In the study by Chella et al. [13], there was the aim of enhancing the dissolution profile of valsartan (antihypertensive). Microcrystalline cellulose (Avicel ® PH 102) was used as the carrier, propylene glycol as the solvent, and Aerosil ® 200 as the coating material. After 30 min of the dissolution study, twice more valsartan was released from LSS than conventional tablets [13]. Komala et al. [14] prepared LSS with raloxifene hydrochloride (a selective estrogen receptor modulator) to improve solubility and permeation in the gastrointestinal tract. The dissolved drug was used in different concentrations (20 and 30% w/w) and adsorbed onto the carrier (Avicel ® PH 102). Aerosil ® 200 (colloidal silicon dioxide) was used as the coating material. Ex vivo tests on rat intestinal tissues showed improved drug permeation due to the ability of non-volatile solvents to increase intestinal permeability [14]. The work of Hentzschel et al. [15] aimed to substitute Avicel ® as a porous carrier by an excipient with high SSA and appropriate flow properties for LSS formulation. Carriers such as Avicel ® PH 102, Fujicalin ® (dicalcium phosphate), and Neusilin ® US2 (agglomerated magnesium aluminosilicate) were tested. Tocopherol acetate (vitamin) was used as the model drug. In this study, it was proven that the mentioned carriers have different adsorption capacities. The use of a highly sorptive excipient allows for the preparation of LSS containing higher doses of poorly soluble drugs, wherein large amounts of a liquid vehicle are usually needed for its dissolving. Using Neusilin ® as a carrier, the tocopherol acetate adsorption capacity was increased by 47% [15]. In the study by Sheth and Jarowski [16], it was proven that Syloid ® 244FP could be used as the carrier and the coating material for the formulation of LSS containing polythiazide (diuretic) [16]. LSS with montmorillonite carrier (clay mineral) was prepared by intercalation of ibuprofen (NSAID) into the clay's structure. Dissolution tests showed that montmorillonite could be used as a carrier for sustained release of ibuprofen after oral administration [17].
Other formulations that can potentially enhance the bioavailability of poorly soluble drugs represent s-SEDDS. For the preparation of s-SEDDS, an isotropic mixture of oils and nonionic emulsifiers are usually used. Subsequently, the SEDDS is adsorbed onto the powder carrier [18,19]. A prepared dosage form can release the lipophilic drug. It self-emulsifies in the gastrointestinal tract due to the present fluid and its movements [18,19]. Yi et al. [20], in their research about s-SEDDS, adjusted the release of nimodipine (a selective calcium channel blocker) using a porous carrier [20,21]. Kamel et al. [22] prepared SEDDS with rutin (flavonoid). As excipients, emulsifier, co-emulsifier, and oil were used. The formed emulsion system was adsorbed onto powder carriers: Neusilin ® US2, Fujicalin ® , and F-melt ® . During the dissolution study, 90% of the drug was released within the first 15 min [22]. Aerosil ® 200 as a carrier was used in the study of Bhagwat et al. [23]. They prepared s-SEDDS with telmisartan (angiotensin II antagonist). It was proven that the formulated powder blend had sufficient flowability for next processing and that s-SEDDS can serve as formulations with increased dissolution rate and higher drug bioavailability [23]. Gumaste et al. [24] compared the suitability for s-SEDDS preparation of six silicates. Satisfactory results were obtained only when using Neusilin ® US2 due to its acceptable compressibility [24].
As shown in the above-mentioned research, a suitable powder carrier plays a vital role in formulating modern solid dosage forms with incorporated drugs in the liquid state. The topic of this work is a comparison of the main properties of 14 powder carriers: cellulose derivatives (Avicel ® PH 101, Methocel ® E4M, Methocel ® K100 LV), silicas and silicates (Aerosil ® 200, FujiSil ® , Neusilin ® NS2N, Neusilin ® S2, Neusilin ® US2, Neusilin ® UFL2, Sipernat ® 22S, Syloid ® 244 FP, Syloid ® XDP 3050), and clay minerals (Bentonite, Vermiculite). These materials were evaluated under the same conditions for flow properties, including angle of slide, bulk and tap density, flow rate through the orifice, shear cell experiments, specific surface area, moisture content, hygroscopicity, pH leaching, particle size (measured by laser diffraction), true (pycnometric) density, porosity, and SEM structure. There is no similar comparative technical study that summarizes data regarding carriers' properties. This fact negatively influences their correct selection for the intended use. The novelty of this study is in the evaluation of carrier materials by the same methodology and under the same conditions. Obtained data may help choose a suitable powder carrier before formulating a specific dosage form such as LSS or s-SEDDS.
Particle Size
Particle size was evaluated on the basis of the volume principle by laser diffraction (LA-960, Horiba, Japan) using denatured alcohol as a liquid medium. Measurements were carried out three times (each sample was repeatedly prepared and measured; values are expressed as means). The most important value was the mean particle size. Other parameters were median size (D 50 ), D 10 , and D 90 (diameters of samples at the 50th, 10th, and 90th percentiles of the cumulative percent undersize plot, respectively). Another calculated parameter showing the width of the size distribution was span. The span of volume-based size distribution is defined as [25]:
Scanning Electron Microscope (SEM)
The surface structure and particle morphology of the porous materials were determined by SEM. The samples were placed on aluminum stubs with double-side adhesive carbon tape, coated with a 10 nm gold layer using sputtering equipment (Quorum Technologies, Lews, UK), and observed using a scanning electron microscope (MIRA3, Tescan Brno, s.r.o., Brno, Czech Republic). Obtained signals of the samples were produced by secondary electrons (SE) at 5 kV voltage and 500× magnification.
Specific Surface Area
The nitrogen adsorption-desorption isotherm of the samples was tested using a surface area and pore size analyzer (Thermo Scientific Surfer, Milan, Italy) to obtain information on the specific surface area (SSA), pore size distribution (W BJH , W HK ), and total pore volume (V t ). The silica and silicate samples were outgassed at 150 • C for 48 h under vacuum; other cellulose and clay mineral samples were outgassed at 70 • C for 12 h under vacuum. The specific surface area was calculated according to the Brunauer-Emmett-Teller (BET) method. The pore size distribution of mesopores was obtained from the corresponding adsorption branch at a relative pressure of P/P0 = 0.3-0.95 by using the Barrett-Joyner-Halenda (BJH) approach. The pore size distribution of micropores was obtained from the corresponding adsorption branch at a relative pressure of P/P0 = 0-0.35 using the Horvath and Kawazoe (HK) approach. Moreover, the total pore volume (V t ) was evaluated from N 2 adsorption at the relative pressure of 0.92 [26].
True Density and Porosity
The powder materials' true density was evaluated by the gas displacement technique using the helium pycnometer (Pycnomatic ATC, Ing. Prager Elektronik Handels GmbH, Wolkersdorf im Weinviertel, Austria), according to Ph. Eur. Porosity was calculated according to Equation (2) [27].
The percentage of moisture content in the powder materials was evaluated by a halogen moisture analyzer (Mettler Toledo, HX204, Greifensee, Switzerland) under the given conditions: standard drying program, drying temperature 105 • C, switch-off criterion 1 mg·50 s −1 . Measurements were carried out three times. Results are reported as start points (time 0-measured immediately after opening the original packaging) with hygroscopicity data.
2.2.7. pH Leaching pH leaching was evaluated as a pH of 2% water dispersion of the tested carrier. The distilled water needed for measurement was degassed by 5 min boiling and subsequent 5 min sonification. The dispersion pH was tested (after 5 min standing) using a surface pH microelectrode connected to a pH meter (pH 210, Hanna Instruments, Curepipe, Mauritius). Measurements were carried out three times.
Flow Properties
The flow rate through the orifice of powder materials was measured by a flowability tester (Ing. Havelka, Brno, Czech Republic) according to Ph. Eur. [28], with a 25 mm orifice diameter. Measurements were carried out three times, and the results are presented as mean values ± standard deviations.
Bulk and tapped volumes were tested in a tapped density tester (SVM 102, Erweka GmbH, Langen (Hessen), Germany) and served to evaluate bulk and tapped densities, Hausner ratio (HR), and compressibility indexes (CI) according to Ph. Eur. [28].
Angle of Slide
The angle of the slide was evaluated with a powder sample (10 g) placed on one end of a metal (stainless steel) plate with a chrome-plated surface. This end was gradually raised until the plate on the horizontal surface formed an angle at which the sample was about to slide [29]. Measurements were carried out three times, and the results are presented as mean values ± standard deviation.
Shear Cell Experiment
The powder rheology of each sample was measured by an FT4 Powder rheometer (Freeman Technology, Tewkesbury, UK). All samples were loaded into a 25 mL shear cell. Measurements were carried out under the laboratory temperature of 23 • C, atmospheric pressure, and relative humidity of 43%. The Mohr's circles and yield locus of the studied powder materials as obtained by shear cell experiments using 9 kPa consolidation stress allowed for the description of flow properties such as cohesion, flow function coefficient (FFc), angle of internal friction (AIF), and relative flow index (Relf). Jenike proposed FFc to describe the powder's ability to flow, which is characterized by the ratio of the consolidation stress σ 1 (the major principal stress MPS, received from Mohr stress circle of the steady-state flow for applied normal consolidation stress) to the unconfined yield strength σ c (the maximum normal stress value which a solid has, also UYS) [30]. FFc can be calculated using Equation (3) [31].
An angle of internal friction determines the powder's flowability (easily or poorly flowing) and ranges from 0 • to 90 • [32]. The relative flow index Relf proposed by Peschl classifies the powder's cohesion level. The Relf was calculated using Equation (4) [33].
where σ 2 is the minor principal consolidation stress at a steady flow. Measurements were carried out three times.
Results and Discussion
This work aimed to compare the physical and structural properties of 14 powder carriers potentially suitable for use in pharmaceutical technology. Powder materials were tested for flow properties, true density and porosity, particle size characterized by laser diffraction, specific surface area, moisture content, hygroscopicity, pH leaching, shear cell experiments, and scanning electron microscopy.
Particle Size
The size of the particles, or their distribution, impacts the technological processability and content uniformity of the final solid dosage forms. Components in solid dosage forms tend to homogenize better when they are of comparable particle size [34]. On the other hand, smaller particles can benefit from adhering to the surface of predominantly presented larger particles and coating them [35]. In general, materials with large particles show better flowing properties. Carriers with small particle sizes usually offer higher sorptive capacity due to large surfaces, but their flow and processability are limited. Mean particle size (MPS), span, D 10 , D 50 (median particle size), and D 90 were analyzed by laser diffraction ( Table 1). The measured values could be influenced by the shape of the particles (laser diffraction interpolates the signals to a spherical shape), so the particle size distribution values should be confirmed, e.g., by image analysis from a scanning electron microscope ( Figure 1) [36]. Cellulose derivatives showed particle size data confirming the manufacturers' specifications. For Avicel ® PH 101, the manufacturer indicated the median particle size about 50 µm, which was confirmed (52.5 µm) ( Table 1) [37]. The particle size of the two types of Methocel ® was not similar, being visible on SEM images ( Figure 1). Methocel ® E4M (142.8 µm) contains more long fibrous particles than Methocel ® K100LV (74.3 µm) [38].
Span represents an important parameter that expresses the width of the particle size distribution (the lower its value, the narrower the particle size distribution) [25]. The values of the measured materials ranged from 0.38 (Bentonite) to 2.14 (Neusilin ® NS2N) ( [35]. Samples with higher values of the span as cellulose derivative (Methocel ® K100LV 1.75) and silicates (Fujisil ® 1.86, Neusilin ® NS2N 2.14, Neusilin ® S2 2.00) could cause problems during solid dosage form manufacturing due to their size non-uniformity. Figure 1. In the case of cellulose derivatives, Avicel ® PH 101 was observed in particular as cellulose microcrystals are packed tightly in the fiber direction in a compact structure resembling bundles of wooden matchsticks placed side by side [45]. The manufacturer indicates that Methocel ® K100LV contains longer, more fibrous particles than Methocel ® E4M [38]. This statement was confirmed in Figure 1 and using a measurement by laser diffraction where we measured mean particle size for Methocel ® E4M (27.71 µm) and Methocel ® K100LV (82.90 µm).
Silicas and silicates, especially FujiSil ® , Neusilin ® NS2N, and Neusilin ® S2, showed spherical well-agglomerated particles. All four types of Neusilin ® used in this study were significantly different in the images. Images confirmed that three types of magnesium aluminometasilicates have different kinds of particles. Three types (NS2N, S2, US2) are available on the market in granular form, and one type (UFL2) in powder form [46]. Other materials showed nearly nonspherical particles. A comparison of Syloids ® (Figure 1) showed a difference in their particles. Syloid ® XDP 3050 was similar in appearance and particle size to the granulated form of aluminometasilicates. Reuzel et al. [47] claimed that Aerosil ® 200 and Sipernat ® 22S had a spherical primary particle shape [47]. In Figure 1, it is shown that the claim can be confirmed under higher magnification.
Clay minerals have several types of morphology. Bentonite shows the typical surface appearance called clay largely composition [48]. This applies to the relatively homogenous soils where most particles are characterized by a varied anisotropy of shape [36].
Specific Surface Area (SSA)
Specific surface area is one of the most important factors for selecting a powder material as a suitable carrier [42]. SSA is related to the ability of the material to absorb the drug onto its surface and in "open" pores. The higher this value is, the higher is the material absorption capacity [5]. The BET method was used to measure tested powders' SSA ( Table 2). The highest measured SSA values were obtained for silicate samples, mainly for new porous silica material available on the market called FujiSil ® (SSA 374.55 ± 4.48 m 2 /g; the size of mesopores 9.33 nm, micropores 0.41 nm, and pore volume 0.46 cm 3 /g). This indicated its promising possible use as a porous carrier [49]. Some of the tested magnesium aluminometasilicates showed higher values than those that were presented by the manufacturers. The manufacturer indicated that Neusilin ® UFL2 (SSA 350.33 ± 2.88 m 2 /g; the size of mesopores 7.62 nm, micropores 0.45 nm) and Neusilin ® US2 (SSA 342.16 ± 2.72 m 2 /g; the size of mesopores 7.99 nm, micropores 0.44 nm) should have SSA 300 m 2 /g and Neusilin ® S2 (SSA 168.82 ± 1.04 m 2 /g; the size of mesopores 5.01 nm, micropores 0.46 nm) 110 m 2 /g [50]. The results of this method strongly depend on the conditions of sample handling, such as the time or temperature of degassing. The accuracy of the measurements was confirmed in the case of colloidal silicas Aerosil ® 200 (SSA 190.48 ± 1.74 m 2 /g; the size of mesopores 7.04 nm, micropores 0.50 nm, and pore volume 0.24 cm 3 /g) and Sipernat ® 22S (SSA 188.92 ± 2.06 m 2 /g; the size of mesopores is 9.70 nm, micropores 0.48 nm and pore volume is 0.24 cm 3 /g). The results were compared to the study of Reuzel et al. [47], where Aerosil ® 200 showed SSA 200 m 2 /g and Sipernat ® 22S 190 m 2 /g [47]. Regarding the pore size, it is important to note that Syloids are highly porous (Syloid ® 244FP, 358.73 ± 3.26 m 2 /g; Syloid ® XDP 3050 289.32 ± 2.29 m 2 /g) and have a pore size 2-50 nm, as confirmed by the measurements (Syloid ® 244FP mesopores 10.66 nm, micropores 0.50 nm; Syloid ® XDP 3050 mesopores 10.58 nm, micropores 0.50 nm) expressed in Table 2 [51]. Pore volume was evaluated for pores smaller than 100 nm in diameter and was determined from desorption data [52]. The pore volume of the measured data ranged between 0.02 and 0.73 cm 3 /g ( Table 2). In the study of Westermarck et al. [52], two types of measurements of pore volume were compared, and we observed that granules had a higher value of pores than powders [52]. This condition was partially met because the agglomerated magnesium aluminometasilicates (Neusilin ® NS2N 0.67 cm 3 /g, Neusilin ® S2 0.30 cm 3 /g, Neusilin ® US2 0.69 cm 3 /g) had one of the highest pore volume values (Table 2). The lowest measured value was Vermiculite (SSA 15.88 ± 0.30 m 2 /g; the size of mesopores 3.34 nm, micropores 0.38 nm, and pore volume 0.02 cm 3 /g).
This test was not applicable for cellulose derivatives. Relatively low SSA of cellulose derivatives could cause the inability to measure this value in this study. Some research groups that used the BET technique obtained several pieces of specific surface area data. It was found that the value of SSA of cellulose derivatives usually ranges between 1 and 20 m 2 /g [42].
True Density and Porosity
The density of the powder is mainly related to the properties such as dilution potential and the size of the final solid dosage form (compressing of denser powder leads to the possible reduction of unsuitable properties of API) [53]. Experimentally measured values of true density were in the range from 1.29 ± 0.00 g/cm 3 to 2.66 ± 0.02 g/cm 3 ( Table 7). The lowest true density was observed for cellulose derivatives (Avicel ® PH 101-1.58 ± 0.00 g/cm 3 , Methocel ® E4M-1.29 ± 0.00 g/cm 3 , and Methocel ® K100LV-1.33 ± 0.00 g/cm 3 ). The literature indicates true density for MCCs from 1.51 to 1.67 g/cm 3 , while the true density of a perfect cellulose crystal is between 1.58 g/cm 3 and 1.60 g/cm 3 [54]. The measured MCC sample confirms this ( Table 3). The highest true density was observed for a sample of silica Aerosil ® 200 (2.66 ± 0.02 g/cm 3 ) as expected for amorphous precipitated material [55]. Values of other silicas and silicates did not differ significantly from each other, and their values ranged from Neusilin ® NS2N (2.14 ± 0.02 g/cm 3 ) to Syloid ® 244FP (2.44 ± 0.02 g/cm 3 ). From Table 3, it is evident that the highest values reached silicas and silicates. The literature shows that highly porous materials usually have high true density (helium reaches very small pores with the open character) [56]. High density values also reached clay minerals Bentonite (2.42 ± 0.00 g/cm 3 ) and Vermiculite (2.64 ± 0.00 g/cm 3 ) ( Table 3).
Moisture Content, Hygroscopicity, and pH Leaching
The moisture content of solid-state pharmaceutical products is one of the main factors that affect drug stability, mechanical properties, processability, etc. [58]. In the case of powder materials, the values of moisture content are connected to the porosity because water can fill the open pores of the material and decrease its porosity. Values of moisture content for all tested powder materials ranged between 1.6 and 8.2% (Table 4). Relatively low values reached cellulose derivatives (Avicel ® PH 101 2.9%). As for water-swellable cellulose derivatives, Methocel ® showed higher values (Methocel ® E4M 3.6%, Methocel ® K100LV 4.7%) compared to Avicel ® PH 101, which is related to their higher hygroscopicity [59]. Low moisture content values were also reported for some silicas (mainly Syloid ® 244FP 3.6% and Syloid ® XDP 3050 3.4%). Syloids can be used to increase the stability of moisture-sensitive APIs [60] and are recommended to improve the physical stability of the dosage form (moisture reduces the feasibility of the drug formulation, reduces the flowability of the pharmaceutical composition, and reduces tablet hardness) [60].
Hygroscopicity is an unfavorable property of many materials used in pharmaceutical technology. Hygroscopicity can reduce the adsorption capacity of the drug due to adsorbed water, change the physical properties of used materials (agglomeration), and sometimes lead to specific requirements of processing conditions and packaging to ensure stability of the drug. From Table 4, it is evident that the moisture content increased from time 0 to 720 h for all the tested materials stored in conditions of increased temperature (40 • C) and high relative humidity (75%).
Callahan et al. [61] showed that cellulose derivatives belong to a slightly hygroscopic materials. The moisture content did not increase at a relative humidity below 80%. After storage for one week above 80% RH, the increase in moisture content was less than 40% [61]. This was not completely confirmed in this study. An increase in the moisture content of Avicel ® PH 101 + 4.4%, Methocel ® E4M + 4.1%, and Methocel ® K100LV + 3.5% during 720 h of testing (Table 4) was found.
Carrier's pH can influence the drug stability, its transition between salt-base, its compatibility with other materials, etc. All tested powder materials showed neutral or slightly basic pH (Table 5). Experimentally measured pH of Avicel ® PH 101 was marginally higher (pH 7.3) than what was indicated by the manufacturer (pH 5.0-7.0) [58]. Silicates, mainly magnesium aluminometasilicates, contain -OH groups associated with Si, Mg, and Al in their structure, leading to different acidic and basic strengths [63]. The manufacturer indicated that Neusilin ® US2 (pH 6.9) and Neusilin ® UFL2 (pH 6.9) have a neutral pH, and Neusilin ® S2 (pH 9.4) and Neusilin ® NS2N (pH 8.3) have an alkaline pH, which was proven [46]. The same findings were observed for other silicas and silicates, such as Aerosil ® 200 (pH 6.3), where producer declared that the pH range should be 0-7.5 [39]. The study of Reuzel et al. [47] evaluated the physical properties of some powder materials such as silicate Sipernat ® 22S. It was stated that this material has a pH 6.3, which is slightly lower than that measured in this study (pH 7.4) ( Table 5) [47]. One of the most alkaline pH was observed, focusing on clay minerals, especially Bentonite (pH 9.5). This was also reported in the study by Kaufhold et al. [64], which showed that the pH of this clay is in the range of 8.5-10.0 [64].
Flow Properties
Flow properties of powder materials were assessed by methods based on the material mobility, e.g., the ability of particles to migrate, such as flow through the orifice (flowability), angle of slide, and parameters such as CI and HR [28].
From the measured data of powder flow properties (Table 6), it is evident that most of the tested materials do not show good flow properties when used alone. Most of them are functional excipients to prepare solid dosage forms after their suitable combination with other excipients such as lubricants or incorporating a liquid phase into their structures [59,67]. For example, colloidal silica is the preferred coating material in the preparation of LSS because of its ability to adsorb the excess liquid from the carrier and ensure the good flowability of the created mixture [68]. Microcrystalline cellulose is commonly used as a carrier in LSS because of its promising sorptive properties, long-term utilization, low price, good stability, and availability in different particle sizes and moisture grades [59]. Magnesium aluminometasilicates are used in the pharmaceutical industry as carriers for solid dispersions to improve drug dissolution or to granulate oily formulations and increase formulation stability [68].
Angle of Slide
According to the study by Spireas et al., there is the appropriate value of the angle of slide 33 • (usually evaluated for powders with already adsorbed liquid phase) [69]. This value was closest to Neusilin ® S2 (36.3 ± 1.2 • ) and FujiSil ® (37.3 ± 0.6 • ) ( Table 7). Other powders showed higher values (39.3 ± 2.5 • -53.3 ± 0.6 • ) of the angle of slide than the optimum, which indicated their worse flow properties [44]. The highest angle of slide showed Aerosil ® 200 (53.3 ± 0.6 • ). It is caused by its fluffy structure and fine aggregates (8.9 µm) [70]. A higher angle of slide was also reported by Neusilin ® UFL2 (43.3 ± 2.5 • -lower than that presented by the manufacturer at 45.0 • ). It is caused by a powder structure and small particle size (measured 6.2 µm (Table 1); declared by the manufacturer at 3.1 µm) [46]. With decreasing particle size, the flow function line progressively worsened the flow properties [66].
Shear Cell Experiments
Shear properties inform how easily the consolidated powder starts to flow. The flow begins when the yield point of the powder is overcome. The yield point is affected by physical properties such as the size and shape of the particles, the moisture content in the material, or the number of flow additives [26]. All the measured powders were exposed to consolidation stress during handling, transport, and storage. This exposition can change mechanical interparticulate forces and density of the powder and impact the measurement [26,71].
The flow properties of tested powders were measured using a shear cell (Table 8) [48]. However, Avicel ® PH 101 (36.7 • ) and Methocel ® K100LV (45.0 • ) indicated slightly cohesive behavior. This was confirmed by flowability testing (Table 6), where Methocel ® E4M was evaluated with its properties (3.21 ± 0.38 s) as the best in the group of cellulose derivatives. These results agree with the manufacturer, who declared better flow properties of Methocel ® E4M [49]. Results of the relative flow index proposed that Avicel ® PH 101 (Relf = 15) and Methocel ® K100LV (Relf = 18) were slightly cohesive, and Methocel ® E4M (Relf = 13) was a non-cohesive material. Most tested silica and silicate powders were characterized as free or easy-flowing materials ( Table 8). FujiSil ® , Neusilin ® US2, Neusilin ® S2, and Syloid ® XDP 3050 were immeasurable on shear cell; even the higher consolidation pressure was used during the measurement (up to 15 kPa). This indicated the free-flowing character of the materials. As an easily flowing were evaluated silicate and silica Neusilin ® UFL2 and Sipernat ® 22S (FFc was 6 in both cases). These results did not correlate with flowability measurements (Table 6), where these two materials were immeasurable. The method with the standardized funnel is limited by the stagnation of the outflow when cohesive powders are tested [72]. Neusilin ® UFL2 and Sipernat ® 22S were evaluated as cohesive, according to the shear cell experiments. The case of the angle of internal friction showed the lowest values Aerosil ® 200 (27.9 • ) and Neusilin ® NS2N (19.2 • ). This was also related to the relative flow index, which indicated that incoherent samples have better flowability [62]. These claims also correlated with results of FFc for other silica and silicates such as Neusilin ® NS2N (FFc = 58), Syloid ® 244FP (FFc = 38), and Aerosil ® 200 (FFc = 16), which indicated that these were free-flowing materials.
Clay minerals were evaluated as rather cohesive with their relative FFc for Bentonite (FFc = 4) and Vermiculite (FFc = 5). The claim about the cohesive property was confirmed by the results of Bentonite flowability (Table 6), where this powder clogged the orifice. Clay minerals were evaluated as very cohesive with values of Bentonite (Relf = 3) and Vermiculite (Relf = 4). The cohesive behavior of the clay minerals powder materials was confirmed by several studies, for example, Broms et al. [48].
Graphical Visualization of Results
Graphical visualization (Figure 2) was created from selected results to compare the tested carriers better. The X-axis represents tested parameters. The Y-axis was divided into ideal range (green color; sign +), acceptable range (orange color; the middle part of the graph), and non-acceptable range (red color; sign −). Each carrier is represented by one line.
Conclusions
Powder carriers represent materials useful for many applications in the pharmaceutical industry (formulations of LSS, s-SEDDS, etc.). The lack of comparative summarized data about their properties can limit their correct selection for the intended use. Fourteen available carrier materials were evaluated in this study. The materials with the most promising and balanced properties were evaluated as being magnesium aluminometasilicates (Neusilin ® US2). New materials on the pharmaceutical market, such as FujiSil ® or Syloid ® XDP 3050, were evaluated as promising porous carriers. Some of the powder materials with small particles and worse flow properties (Neusilin ® UFL2, Bentonite, and Aerosil ® ) could be advantageous as coating materials that cover the surface of primary carriers during the formulation of LSS or s-SEDDS.
Author Contributions: K.K.: data curation, investigation, methodology, resources, software, visualization, and roles/writing-original draft. B.B.P.: data curation, methodology, resources, and software. S.H.: data curation, methodology, resources, and software. J.V.: data curation and software. D.V.: funding acquisition and supervision. J.G.: conceptualization, data curation, investigation, methodology, project administration, supervision, and writing-review and editing. All authors have read and agreed to the published version of the manuscript. | 7,652.4 | 2022-04-01T00:00:00.000 | [
"Materials Science",
"Medicine"
] |
Local uniform stencil (LUST) boundary condition for arbitrary 3-D boundaries in parallel smoothed particle hydrodynamics (SPH) models
This paper presents the development of a new boundary treatment for free-surface hydrodynamics using the smoothed particle hydrodynamics (SPH) method accelerated with a graphics processing unit (GPU). The new solid boundary formulation uses a local uniform stencil (LUST) of fictitious particles that surround and move with each fluid particle and are only activated when they are located inside a boundary. This addresses the issues currently affecting boundary conditions in SPH, namely the accuracy, robustness and applicability while being amenable to easy parallelization such as on a GPU. In 3-D, the methodology uses triangles to represent the geometry with a ray tracing procedure to identify when the LUST particles are activated. A new correction is proposed to the popular density diffusion term treatment to correct for pressure errors at the boundary. The methodology is applicable to complex arbitrary geometries without the need of special treatments for corners and curvature is presented. The paper presents the results from 2-D and 3-D Poiseuille flows showing convergence rates typical for weakly compressible SPH. Still water in a complex 3-D geometry with a pyramid demonstrates the robustness of the technique with excellent agreement for the pressure distributions. The method is finally applied to the SPHERIC benchmark of a dry-bed dam-break impacting an obstacle showing satisfactory agreement and convergence for a violent flow. © 2019 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license. ( http://creativecommons.org/licenses/by/4.0/ )
Introduction
Smoothed particle hydrodynamics (SPH) is becoming increasingly popular to apply to a range of applications including hydraulics, biomedical and nuclear applications [5,14,[41][42][43] . However, the robust numerical simulation of these applications is highly dependent on the performance of the boundary conditions employed within the SPH model. Since the early application of SPH to free-surface flows and confined flows found throughout engineering applications, boundary conditions have been the subject of intense scrutiny and development [49] . Despite this concerted effort by the SPH community, imposing boundary conditions in SPH is still an open problem due to the Lagrangian nature of SPH and the kernel based interpolation. This is recognised by boundary conditions being identified as Grand Challenge in the Smoothed Particle Hydrodynamics European Research Interest Community (SPHERIC, http://www.spheric-sph.org ). * Corresponding author.
Methodologies for imposing the solid wall boundary conditions can be grouped in three generic methodologies: repulsive functions, fictitious particles and boundary integrals [50] .
The widely used repulsive function approach, proposed by Monaghan [35] and Monaghan and Kajtar [36] describes the walls by particles which exert a repulsive short-range force similar to a Leonard-Jones potential force on fluid particles. With this approach 2-D and 3-D irregular geometries can be easily discretized, but the kernel truncation near the wall is not explicitly treated and can introduce non-negligible inaccuracies.
Another widely used method to describe boundaries in SPH [1,7,27] is to use fictitious particles to represent the presence of the boundary. This can take the form of mirror or ghost particles as introduced by Randles and Libersky [39] . However, extending the method to 3-D is challenging for irregular geometries. Alternatively, fictitious particles can take the form of stationary fluid particles or similar [1,26] to whom appropriate properties are applied to enforce the physically correct conditions of impermeability. An example of such an approach is the dynamic boundary condition (DBC) [7] currently implemented in the DualSPHysics code [8,9] which is suitable to reproduce complex geometries and is ideally suited to parallelisation on heterogeneous architectures such as GPUs, but suffers from drawbacks such as over-dissipation and spurious pressure oscillation. The approach of Marrone et al. [27] which used interpolation of the particle properties inside the fluid domain transformed to the boundary particles reduces these effects, similar to the work of Adami et al. [1] .
Another alternative is based on the work of Kulasegaram et al. [20] who proposed using an approximation to the surface integral to account for the truncated kernel. This introduces a correction factor into the SPH summations and additional terms in the conservation equations in order to mimic the presence of the boundary. The work of Kulasegaram et al. [20] uses an empirical function originating from variational principles to approximate the force. This concept was further developed in Bierbrauer et al. [4] , De Leffe et al. [10] , Marongiu et al. [25] , Ferrand et al. [12] and Mayrhofer et al. [29] avoiding the use of an empirical function. These methods have the advantage of restoring zero consistency in the SPH interpolation and can reproduce to first order the physical conditions of Neumann wall boundary stress conditions and near wall shear stress for low Reynolds number flows. However, as noted recently by Valizadeh and Monaghan [49] , the discretization of complex 3-D geometries and/or multi-phase flows is challenging [28] . More recently using a cut-face process and improved secondorder operators, Chiron et al. [6] have extended the 3-D surface integral idea of Mayrhofer et al. [28] to the Riemann-based SPH formulation that employs an evolution equation for particle volumes.
None of the aforementioned approaches has emerged as being uniquely superior to other boundary techniques. Valizadeh and Monaghan [49] made a comparison of several boundary conditions showing that for weakly compressible SPH, the formulation of Adami et al. [1] is superior in terms of pressure fields in the vicinity of the wall and stability. However, it is not clear how this can be extended to 3-D geometries and eventually to higher order convergence. Whilst many of these boundary conditions work well for academic test cases with simple geometry, their extension to arbitrarily complex 3-D geometries is not straightforward. Furthermore, SPH is inherently computational expensive, therefore boundary conditions algorithms have to be suitable to be accelerated by means of heterogeneous hardware such as GPUs. This motivates the methodology developed in this paper.
An interesting variant of the fictitious particle approach was proposed by Ferrari et al. [13] , who similar to Yildiz et al. [51] , used a local point symmetry method which is able to discretize arbitrarily complex geometries without introducing empirical forces. This approach has some very clear theoretical attractions, namely that is it possible to generate an individual stencil for a particle of any phase or identity such as in multi-phase flows, and should be general to any geometry. However, when the method was applied to shallow water equations (SWE) [46] it was evident that the original proposal of Ferrari et al. [13] was insufficient to complete the missing kernel support and prevent particles from penetrating the boundary and an enhancement was needed. This was modified again for the Navier-Stokes equations in 2-D [16] to address issues in the inconsistencies in the moments of the kernel and their derivatives which are indicators of the accuracy of any boundary condition.
Although the virtual boundary particles methodology has merit when applied in 2-D, such an approach can be cumbersome in 3-D. Representation of 3-D domains using predefined virtual particles becomes computationally memory intensive since the 3-D domain boundary triangles need to be discretized with geometrical points. Moreover, each fluid particle interacts with all the virtual particles within its support and large numbers of virtual particles are required to represent the solid boundary in 3-D.
During the development work presented in Vacondio et al. [46] and Fourtakas et al. [16] , it became apparent that another option existed to formulate a boundary condition that could fulfil all the attractions of the Ferrari et al. [13] method but in a practical manner. This idea is the focus of this paper which proposes a new formulation of the local fictitious particles approach where a local uniform stencil (LUST) follows each individual fluid particle and only fictitious particles located within the region of the boundary are activated. Boundary are represented by lines (in 2-D) or triangles (in 3-D) without introducing virtual particle discretization. The method is easy to extend to arbitrarily complex 3-D geometries and has been accelerated for heterogeneous architectures such as GPUs. Approximate zero-th and first order consistency are ensured by using a fully uniform fictitious particle stencil. The proposed wall boundary condition is implemented in the open-source GPU code DualSPHysics [9] . This paper is structured as follows; In Section 2 , the governing equations and the discretization of the standard weakly compressible SPH equations is presented together with a correction for the density diffusion term. This is followed by the description of the methodology of the LUST fictitious particles and the implementation on GPU. The accuracy and robustness of the new methodology is then assessed with several validation cases including the Poiseuille flow in 2-D and 3-D, a 3-D still water case with a pyramid in the domain and the SPHERIC benchmark 3-D dam break impact test.
SPH formalism
In SPH, the kernel approximation of a scalar function f at position x in the continuum domain is obtained as: where is the volume of the integral, W is the smoothing kernel function, h is the smoothing length, used to define the influence area of the smoothing kernel function. In practical applications kernels have a compact support, meaning that W goes to zero at a finite distance from x . In a discrete domain Eq. (1) can be approximated with a summation: where f ( x j ) = f j is the value of the scalar function f at particle j with position x j and associated volume V j and N is the total number of particles. The ... symbol denotes an SPH interpolation and will be dropped for simplicity in the rest of the paper.
Starting from Eq. (1) it is possible to derive the following Equation to approximate the derivative of gradient of a scalar function f : The discrete approximation of Eq. (3) is the following: A variety of kernels have been proposed in the literature [23] such as the Gaussian kernel, B-spline kernels, and the 5thorder Wendland kernel. The latter has been adopted in this paper due to its simplicity and low computational requirements: where a d = 7/(4 π h ) and a d = 21/(16 π h ), respectively, in a two-and three-dimensional space and R = | x -x ' |/ h . More details on the SPH formulation and recent developments can be found in [50] .
Governing equations
This section presents the governing equation in SPH form. The Navier-Stokes equations can be written in a continuous Lagrangian form for a weakly compressible fluid as where u is the velocity, ρ is the density, P is the pressure, τ is the deviatoric component of the total stress and g is the gravity acceleration.
The continuity and momentum equations are coupled by means of the Tait's equation of state (EOS): with the polytrophic index γ = 7, c 0 = ∂ P/∂ ρ is speed of sound and subscript 0 denotes reference values.
Herein we adopt the classical SPH formulation of Eq. (4) as the aim of the present paper is to present a novel way to discretize wall boundaries and not investigate more recent formulations now available in literature which address instabilities in SPH [22,38,44,47,48] . Without such stabilisation, the test cases presented in Section 6 are more challenging and might indicate possible problems in the boundary treatment.
Throughout this paper, i and j subscripts denote the interpolated particle and its neighbours, respectively. The density evolution and momentum of the particles follow the Navier-Stokes equations [33] using the addition of the δ-SPH formulation of Antuono et al. [3] where the subscripts ij denote the difference in value f ij = f i -f j for a field variable f , hence u ij = u i -u j and x ij = x i -x j and m j the mass of the neighbouring particle. In the dry-bed dam-break impacting an obstacle test case, the artificial viscosity [34] ; Ferrari et al. [13] has been used to stabilize the solution and provide an artificial viscous force denoted as ij , where α π is the free parameter, and the overbar denotes the average values of the i and j particles such that c i j = 1 2 ( c i + c j ) and where η = 0.001 h to avoid singularity as r ij → 0.
In the continuity equation the second term on the RHS is a diffusion term added to prevent spurious oscillations in the density field.
The time integration method adopted is the symplectic predictor-corrector scheme bounded by the CFL condition [35] . Further details on the time integration scheme can be found in Crespo et al. [9] .
Improved density diffusion term for gravity driven flows
The diffusion term in Eq. (8) , can take different forms accordingly to the formulation adopted for ψ ij as clearly explained in Antuono et al. [2] . For example Molteni and Colagrossi [32] suggested that, whereas later Antuono et al. [3] proposed the so-called δ-SPH formulation, where ∇ρ L j and ∇ρ L i are the normalized gradients computed, respectively, at particle i and j . In comparison with Eq. (13) , in Eq. (14) high order terms are introduced in Eq. (14) by Antuono et al. [3] , to ensure consistency at the free surface. Therefore, δ-SPH formulation can be successfully adopted for gravity dominated flow (see Antuono et al. [2] and Green et al. [17] for a comprehensive analysis).
In the present work we introduce a different formulation for the diffusion term of Eq. (8) , aiming at restoring consistency at the free surface avoiding to compute the density normalized gradients.
The key idea is to use the same formulation proposed by Molteni and Colagrossi [32] but substituting the dynamic density with the total one. Thus the term ψ ij takes the following form: where the superscript D denotes the dynamic density or pressure.
Since ρ D = ρ Tρ H (where superscript T and H denote the total and hydrostatic component, respectively), Eq. (13) can be rewritten in term of the hydrostatic and total parts as In the context of the weakly compressible SPH, the equation of state relates the density to the total pressure at the particle location. However, only the hydrostatic part of the pressure is needed. Using Eq. (7) the hydrostatic density difference can be obtained by The term P H i j is simply the hydrostatic pressure difference of particle i and j , where z ij is the vertical distance between particle i and j and In comparison with the formulation of the diffusive term proposed by Molteni and Colagrossi [32] . Eq. (16) improves the behaviour of pressure near the wall boundaries, as demonstrated in Section 6 , avoiding to compute the normalized density gradient. However, it must be noted that the formulation of Antuono et al. [3] of Eq. (14) is general and it can be adopted for any type of flow, whereas the use of total density in the diffusive term of Eq. (15) is expected to work better than Eq. (13) only for gravity-dominated flows.
Wall representation using triangles
The representation of the solid boundary line using SPH particles is well documented in literature [49,50] . SPH particles located on the boundary are either used as interpolation points or for geometrical purposes. With the former approach the governing equations are approximated on the boundary particles whereas with the latter approach line points serve the function of geometrical points for the generation of a set of fictitious particles within the truncated kernel support of the solid boundary.
The representation of the solid wall boundary using line points or virtual boundary particles has gained popularity recently [13,16,46] . The main advantage of this approach derives from the ease that fictitious particles can be generated at runtime without the need to predefine fictitious particles within the solid wall. Furthermore, the ability to readily generate a local uniform stencil within the solid boundary is beneficial satisfying the consistency criteria associated with the kernel support resulting on zeroth and first order consistency for uniform fluid domains. Fig. 1 illustrates the main mechanism of the virtual boundary particles methodology as proposed by Fourtakas et al. [16] .
Herein, a different approach is proposed to represent the solid boundary line but retaining the aforementioned advantages of the virtual boundary particles methodology. In the proposed method, boundaries are discretized by means of triangles (in 3-D) without introducing virtual boundary particles. The triangulated surfaces can be readily used in 3-D without special treatments when discretizing arbitrary complex geometries. In this approach the discretization of the solid boundary line is independent of the particle resolution of the domain. In the absence of virtual particles, the fully uniform fictitious stencil is translated according to the position of the fluid particle as shown in Fig. 2 . Each fluid particle interacts only with triangles located within its support by using the ray casting algorithm [40] reducing the interaction drastically when generating fictitious particles. The mechanism used to complete the truncated support near the boundary using the triangles is described next.
Local uniform stencil (LUST)
The idea of a local point of symmetry fictitious particle generation mechanism is not new to SPH. In Ferrari et al. [13] , Vacondio et al. [46] and Fourtakas et al. [16] virtual particles are used to generate a set of uniform fictitious particles to complete the truncated kernel support depending on the distance of the fluid particle to the solid boundary and thus maintain zero-th and first-order consistency for uniform particle resolutions. The authors have demonstrated that the zero-th and first-order consistency for non-uniform particle resolutions is approximately satisfied.
In contrast to the work of Fourtakas et al. [16] and Vacondio et al. [46] where fictitious particles are generated depending on the distance of the fluid particle to the solid boundary, in the new scheme proposed herein, the use of virtual particles is replaced with a local uniform stencil that surrounds each fluid particle. The unique stencil is generated for an arbitrary particle at the beginning of the simulation and stored in memory.
When the support of a fluid particle is truncated by a solid wall represented by triangles, fictitious particles within the LUST stencil located within the fluid domain are discarded and only fictitious particles located within the solid wall actively contribute to the summations in Eqs. (8) and (9) . Fig. 2 demonstrates a 3-D example where a fluid particle (shown in blue) is located near a corner. The particles shown in dark red are the fictitious particles belonging to the LUST stencil that are located inside the boundary and will therefore contribute to the summations as described below.
The main difference with the previous methods is that the distance from the fluid particle to the fictitious particle is constant and depends on the particle spacing x and kernel characteristics only, resulting in a uniform particle distribution within the solid wall. An example of a stencil generated for a fluid particle located at a distance 2 x > d > x and x > d > 0 is shown in Fig. 3 . Note that, due to the particle regular distribution, the method is able to guarantee that the zeroth-and first-order moments are equal to 1 and 0, respectively. Thus, the theoretical rate of convergence can be achieved for the SPH operators [21] . Lind and Stansby [21] and later Fourtakas et al. [15] showed that maintaining a regular particle distribution improves the accuracy and the convergence rate of SPH interpolation. Thus, herein we maintain the uniform distribution near the boundary and non-uniform distribution of fluid particles within the domain due to the Lagrangian nature of the formulation.
Each fictitious particle must be given values of particle properties in order to impose boundary conditions for the velocity and pressure of the fluid at the boundary. To ensure mass conservation and satisfy Eq. (8) , the mass of the fictitious particle is equal to the where the subscript f refers to the fictitious particle.
To impose no-slip boundary conditions the velocity of the fictitious particles u f is assigned according to Takeda et al. [45] method by where the subscript v denotes the virtual wall, r vf is the perpendicular distance between the fictitious particle and the boundary triangle according to r vf = r vf •n where n is the normal to the surface pointing into the fluid. Similarly, r iv is the perpendicular distance between the fluid particle i and the boundary triangle as shown in Fig. 4 and u f and u v is fluid particle velocity and the physical wall velocity, respectively. A similar formulation to Eq. (20) is used for the pressure of the fictitious particles with the added correction for the hydrostatic pressure which ensures that ∂ P/ ∂ n → ∞ as the particle distance r iv → 0 from the solid wall is going to zero, imposing a no-penetration boundary condition on the solid wall by a repulsive pressure.
Finally, the density of the fictitious particles is a function of the pressure of Eq. (21) according to the equation of state Eq. (7) In Table 1 the pseudocode of the LUST algorithm is shown. For each particle interaction the LUST stencil is moved accordingly to the position of the fluid particle array pa[ i ] (line 2), then the summation over all active fictitious particles of the stencil starts with the loop at line 3. For each j th fictitious particle of the stencil the ray casting algorithm is used to assess whether it is inside or outside the fluid region. If the j th particle is outside the fluid region, then physical quantities are assigned to the fictitious particle (line 13) by Eqs. (19) - (22) and finally the contributions to the summation of Eqs. (8) and (9) of particle j are computed (line 14).
It should be noted that within this work the boundary treatment avoids the need to correct the kernel or gradient to account for the missing kernel support. The applicability of the methodology is demonstrated for both 2-D and 3-D geometries.
Implementation on GPU
Recently, the emergence of graphic processing units (GPUs) for scientific computing has enabled acceleration of massively dataparallel computations. The architecture of GPUs is particularly suitable for SPH simulations as an energy efficient and cost effective Fig. 4. Fluid particle support generation and normal distances from the fluid particle to the virtual wall and fictitious particle. [24] . The attraction comes from the parallel, multithreaded many-core computing architecture of NVIDIA GPU cards which is well suited for HPC applications in general. Different SPH codes have been parallelized to exploit the computational power of GPUs [9,18] .
The GPU memory is organized in several different types that can be used by programmers to achieve high Computation to Global Memory Access (CGMA) ratio and thus high efficiency of the solver. Variables which are stored in registers and in shared memory can be accessed with low latency in a highly parallel manner. While registers are private to each thread, shared memory can be accessed by all threads that belongs to the same block. However, registers and shared memory sizes are limited, ranging from a few bytes to kilobytes, respectively. On the other hand the global memory is "off-chip" with slower communication. Finally, the constant memory allows read-only access by the device and provides faster and more parallel data access paths than the global memory. Therefore, appropriate implementation of the numerical model can have a big effect on the speedup achieved.
As described in the Section 4 , each fluid particle has a predefined stencil of fictitious particles. This predefined stencil moves with the fluid particle throughout the simulation but does not change during the simulation so it is created and stored in the global GPU memory only once at the beginning. When computing acceleration for a fluid particle located close to the wall, a test should be performed to determine which fictitious particles in the predefined stencil are located in the boundary region. To determine whether a fictitious particle lies in the boundary region, it is necessary to check if the line segment connecting the fluid particle to the fictitious one intersects any of the triangles that define the wall. If so, the triangle with the intersection point closest to the fluid particle is chosen. In any case, the ray casting algorithm [40] in 3-D is used to determine if the fictitious particle is valid, whether it lies in the boundary region or not. The number of triangles required to define complex geometries can be significantly high for fine resolutions. In order to reduce the number of triangles included in the test, the neighbour-list algorithm [11] used for the simulation is altered to include a list of triangle neighbours in each neighbour-list cell. The list of triangles in each cell is created and stored in the GPU memory at the beginning of the simulation. However, the list must be updated if the boundary position undergoes displacement. Achieving an efficient GPU implementation for the LUST algorithm is a complex task due to multiple loops in the code and memory accesses required to determine the fictitious particles. An option to increase the performance is to store the relevant triangle information in shared memory but the limiting factor is the restricted size of the shared memory per multiprocessor (less than 64 Kbytes for compute capability 6.0 or higher for NVIDIA GPUs) referred to as "OneStep".
Another option to increase the GPU efficiency is by splitting the force computation summation into two different CUDA kernels (referred to as "TwoStep") for each fluid particle. In the first CUDA kernel, the boundary triangle that intersects the segment joining the fluid and the fictitious particle is defined (if present). The interaction with the fluid particle is then performed using the second CUDA kernel. This process significantly reduces the code complexity, decreases the register occupancy and minimizes irregular memory access. Also, a combination of the "TwoStep" algorithm with the shared memory is possible. In the latter case, the GPU memory increases since the triangle for each point of the predefined stencil needs to be stored in the global memory for every fluid particle (referred to as "OneStepShared", "TwoStepShared"). Results for performance and memory usage have been analysed using a 3-D dam-break impact with obstacle test case (SPHERIC benchmark test #2) presented in Section 6.3 . The results are compared with the DBC [7,9] currently available in the open-source DualSPHysics in Fig. 5 using an NVIDIA Tesla K20c GPU card. The DBC is faster than the new approach and the speedup is as seen in Fig. 5 . Nevertheless, results obtained in Section 6 with LUST in comparison to the DBC for the 3-D dam break show better agreement with the experimental data. On the other hand, the LUST boundary condition is also more memory-consuming than DBC, shown in Fig. 6 . Figs. 5 and 6 present results for the first version ("OneStep"), with the improvement when using two steps ("TwoStep") and with the use of shared memory ("OneStepShared", "TwoStepShared").
Poiseuille flow in 2-D
To test the accuracy and convergence characteristics of the LUST boundary conditions in the absence of gravity and its ability to impose a non-slip boundary condition, a laminar Poiseuille flow is [37] is used to model the viscous forces. The particle resolution for this test case is x = 0.2, 0.1 and 0.05 m resulting in 110, 420 and 1640 particles, respectively. The same case has been simulated by numerous researchers including Ferrand et al. [12] . The adopted models and parameters for the SPH model are shown in Table 2 . Fig. 7 shows the velocity field for the test case with x = 0.05 m after steady state has been reached at t = 15 s along with the absolute velocity error. It is notable that the majority of the absolute error is not close to the wall boundaries but instead in the interior fluid domain. Note, that the SPH formulation that has been used in this test case is the classical SPH formulation [34] without the use of density diffusion so that any errors near the wall boundary are evident which would not be visible otherwise by using a diffusion term or other density filtering techniques. Fig. 9 with a satisfactory order of convergence of 1.55.
Poiseuille flow in 3-D
Although the 2-D Poiseuille flow demonstrated good accuracy and satisfactory convergence, the purpose of the LUST methodology is its ability to handle arbitrary 3-D geometries such as sharp corners as demonstrated in the 3-D still water test cases below and curved boundaries. Therefore, the Hagen-Poiseuille 3-D flow which requires a 3-D tube in the absence of gravity is an ideal test case. The Reynolds number was set to Re = 5, with a channel diameter of d = 1 × 10 −2 m with a freestream velocity of U ∞ = 5 × 10 −2 m/s. The particle resolutions for this test case are x = 0.0 01, 0.0 0 05 and The test case configuration of the SPH model is shown Table 3 . A cross section of the velocity field of the laminar Hagen-Poiseuille flow with a Reynolds number of Re = 5 is shown in Fig. 10 for the simulation with x = 0.0 0 025 m. The flow field near the boundaries and within the domain is smooth without any spurious velocities. A plot of the absolute error of the velocity magnitude is shown in Fig. 10 (b) with less than 4% absolute error to the freestream velocity. Also, a comparison of the velocity profile with the analytical solution with satisfactory results is shown in Fig. 11 followed by the convergence behaviour at steady state at time t = 15 s with an order of convergence of 1.12 shown in Fig. 12 .
Still water in 3-D including a pyramid
To evaluate the performance of the proposed methodologies in presence of free-surface and gravity driven flow, the still water has been simulated in a 3-D square box with a pyramid at the bottom of the square tank. The pyramid was inserted to demonstrate the ability of the methodology to deal with more complex geometries, such as discontinuous points with internal and external angles and slopes within the computational domain. The classical SPH weakly compressible approach of Monaghan [34] with the corrected and uncorrected density diffusion term of Section 3 . B. has been used together with the formulation of Morris et al. [37] Table 4 lists the parameters adopted for the still water in 3-D including a pyramid. Fig. 13 shows the velocity and pressure field of the domain cross section at x = 0.5 m using the original formulation of the density diffusion term ( Eq. (13) ) and the correction proposed in Section 3.2 . based on dynamic density ( Eq. (15) ). The pyramid is shown in red color in both images. Evidently results obtained with the proposed correction are in agreement with the analytical solution and the velocity magnitude is reduced by an order of magnitude.
The non-dimensional pressure and velocity profiles at ( x,y) = L / √ 2 of the domain are shown in Fig. 14 for x = 0.001 m. Clearly, in Fig. 14 (a) the uncorrected density diffusion term shows a dip in the pressure near the wall boundary on the order of 10% of the total pressure that is eliminated in Fig. 14 (c) of Eq. (14) . A comparison near the wall is provided in Fig. 14 The convergence study on the velocity and pressure L 2 errors is shown in Fig. 15 with the reasonable order of convergence of approximately 1.3 and 1.1, respectively, for the velocity and pressure. Note that, although the order of convergence has improved marginally by the corrected density diffusion term, the reduction in error between the two density diffusion term formulations is significant on the order of one magnitude.
In addition to the pressure and velocity field and convergence study, Fig. 16 shows the variation of the total kinetic energy of the fluid particles with time for x = 0.001 m . Evidently, after the initial settling of the fluid due to the weakly compressible formulation, the total kinetic energy decays rapidly. No significant differ-
3-D dam break (SPHERIC Benchmark Test Case #2)
Although the still water test case is ideal to demonstrate the ability of the LUST to deal with hydrostatic conditions and the Poiseuille flow shows the noise of the domain and no-slip characteristics imposed on the wall boundaries, SPH is ideal to simulate applications with for non-linear behaviour such as high-speed impact flows and fragmentation in the presence of a free surface. Thus, the SPHERIC benchmark #2 has been chosen to demonstrate the ability of SPH in conjunction with the LUST wall boundaries to deal with such demanding flows.
The test case involves a breaking dam flow that further impacts a structure downstream where pressure and water height gauges have been placed. The reader is directed to the SPHERIC benchmark #2 test case [19] , for the geometrical of the configuration and pressure and water height probes. The particle resolutions for this test case are x = 0.0 02, 0.0 01 and 0.0 0 05 m. Table 5 lists the parameters adopted for the still water for the SPHERIC benchmark #2. In this case, we have used h / x = 1.3 is standard practice, particularly for computationally expensive 3-D simulations using WC-SPH such as used in DualSPHysics. Fig. 17 shows the velocity field of the domain before impact, Fig. 18 at the impact and Fig. 19 after the impact of the breaking dam on the structure. Evidently, the velocity field is smooth and no fluctuations are present in the fluid domain or near the boundaries. However, more interesting are the flow features near the structure impacted by the breaking dam. Fig. 20 shows a slight separation of the near-boundary particles on the order of x / 2. This is due to the greater velocity of particles arriving from the left flowing over particles with near zero velocity immediately adjacent to the boundary.
Without resolving the boundary layer (which would require a computationally prohibitive resolution) a simulation with a finer resolution would mitigate this effect. This is another advantage of the LUST methodology by imposing a non-permeable wall boundary with non-slip characteristics. Finally, a comparison of the pressure against the experimental results is shown in Fig. 21 and Fig. 22 shows the water height probes against the experimental results for all three different particle resolutions. The pressures and water heights are well predicted especially as the resolution of the particle spacing increases which shows convergence to the experimental results with closer agreement to advanced multi-phase solver of Mokos et al. [30] which highlights the improvements over previous wall boundary conditions used in DualSPHysics [9] . Results in Fig. 21 (d) are consistent with other authors [9] for probe P4 located on the top of the obstacle where the effects of air affect the experimental and numerical results.
To demonstrate the simulation runtimes, the computational times for the 3-D cases are shown in Table 6 for the tested resolutions. By refining the particle resolution the computational cost of a particle per time step is reducing and the computational time per time step is increasing with expected rate. For the 3-D dam break which uses sufficient number of particles (5.3 million) for the high resolution, scalability of the computational cost per time step is evident without reaching a plateau.
Conclusions and discussion
This paper has presented a new boundary treatment for freesurface hydrodynamics for SPH using a local uniform stencil (LUST) of fictitious particles that surround and move with each fluid particle. The LUST particles are only activated when they are located inside a boundary. The methodology employs a ray tracing procedure with triangles representing the geometry to identify when the LUST particles are activated. The new solid boundary formulation addresses the issues currently affecting boundary conditions in SPH, namely the accuracy, robustness and applicability and is straightforward to parallelize such as a GPU demonstrated here. A new correction to the density diffusion term treatment corrects for pressure errors at the boundary showing much closer agreement than standard δ-SPH for hydrostatic pressure distributions for the challenging case of still water in a complex 3-D geometry with a pyramid. The methodology is applicable to arbitrary complex geometries without the need of special treatments for corners and curvature. Results from 2-D and 3-D Poiseuille flows shows that the method converges demonstrating the robustness of the technique with excellent agreement for the velocity profiles. The method is finally applied to the SPHERIC benchmark of a dry-bed dam-break impacting an obstacle showing satisfactory agreement and convergence for a violent flow.
The method now stands to open the route forward as a robust and easy-to-implement boundary treatment which will be of great potential benefit to more complex SPH schemes such as variable particle resolution [47] and multi-phase flows [30,31] . Since the local uniform stencil is generated using the fluid's smoothing length h the generated support can in take any shape required by the padaptivity scheme (tetrahedral, hexahedral etc.). The resulting fictitious support can be generated locally for any h without the need of further kernel corrections. A variable resolution methodology has not been investigated here as it is the focus of future research. work is also funded by the Ministry of Economy and Competitiveness of the Government of Spain under project " WELCOME ENE2016-75074-C2-1-R ". | 8,779.4 | 2019-08-01T00:00:00.000 | [
"Engineering",
"Physics",
"Computer Science"
] |
Innovative technologies in manufacturing, mechanics and smart civil infrastructure
ABSTRACT An overview on converging technologies that are the primary drivers of the 4th Industrial Revolution is presented, followed by new developments in advanced manufacturing, nano-,information-technologies and smart civil infrastructure technologies. Convergence of these transformative technologies is discussed. Emphases are on advanced manufacturing, nano mechanics/materials, sensors, structural control, smart structures/materials, energy harvesting, multi-scale problems and simulation methods.
The first 3 industrial revolutions occurred about 100 years apart and they changed the world. The 4th industrial revolution happened only a couple of decades from the 3rd one and it has already made profound impacts on the quality of life in terms of productivity, connectivity, education, and all aspects of life. Some of the examples of the latest progress are cloud computing, big data, Internet of Things (IoT), etc. According to ECN magazine 17 January 2017 issue, IoT enabled sensors will generate USD 10 B revenue globally in 2020. The following lists the attributes of the different industrial revolutions [5,17,18]: • Tyranny of Scales • Verification, Validation, and Uncertainty Quantification • Dynamic Data Driven Simulation Systems • Sensors, Measurements, and Heterogeneous Systems • New Vistas in Simulation Software • Big Data and Visualization • Next Generation Algorithms According to the Moore's Law, the computer speed double every 18 months over the last 30 plus years. However the software usually lags behind the hardware. The following figure ( Figure 1) is a rare example where the software is leading [26].
Multi-scale problems
Nanotechnology is a very efficient way in the creation of new materials, devices and systems at the molecular levelphenomena associated with atomic and molecular interactions strongly influence macroscopic material properties with significantly improved mechanical, optical, chemical, electrical and other properties [1,3,25,27]. NSF former Director Rita Colwell in 2002 declared, 'nanoscale technology will have an impact equal to the Industrial Revolution'. However, nanotechnology has to scale up to make useful systems and devices, hence we need to study the multi-scale problems [20,[28][29][30][31][32]. In 2000 Boresi and Chong in an earlier edition of an Elasticity text [15] listed the following Table 1, detailing the different scales and their related topics.
Basically, there are two major methods of multi-scale modeling: sequential and concurrent. The following are the pros and cons of both methods [33], • Sequential Multiscale Modelling (pro) the idea is straightforward; the theories/principles at each level are mature (e.g. continuum mechanics, molecular dynamics, quantum mechanics, etc.), and therefore we just adopt different theories/principles at different scales and passed information in a bottom-up way or top-down way. (con) the connection between different length scales is weak, since not all information at one scale can be totally passed to its higher or lower scale.
• Concurrent Multiscale Modeling (pro) can solve the problem efficiently while still maintain high resolution at critical region (con) some problems cannot be solved well. The typical un-solved challenge is how crack/dislocation/heat can propagate from the critical area to non-critical area.
Advanced manufacturing
According to the National Science Foundation, advanced manufacturing enables innovation capacity for manufacturing by emphasizing research on: (1) depend on the use and coordination of information, automation, computation, software, sensing, and networking, and/or (2) make use of cutting-edge materials and emerging capabilities enabled by the physical and biological sciences, for example nanotechnology, chemistry, and biology.
It involves both new ways to manufacture existing products, and the manufacture of new products emerging from new advanced technologies [34,35]. Cloud manufacturing is also an enabling tool [36].
Scalable nano-manufacturing (NSF 15-107; www.nsf.gov) is a NSF research initiative to overcome the key scientific and technological barriers that prevent the production of useful nanomaterials, nanostructures, devices and systems at an industrially relevant scale, reliably, and at low cost and within environmental, health and safety guidelines.
One of the shortcomings of 3-D printing [37] is the weak interfacial strength between layers of materials printed. The properties along the layers are different than those across the layers, behaving like a transversely isotropic material [15]. To overcome this weakness, vertical reinforcements can be pre-positioned like reinforcement in a concrete column shown below ( Figure 2). As for the 4-D printing [38], instead of building static 3-D items from layers of plastics, metals or other materials, 4-D printing employs dynamic materials, such as piezoelectric materials, that continue to evolve in response to their environment after fabrication.
Smart infrastructure
Smart structures refer to next-generation structures with self-diagnosis and prognosis, self-healing and repair, self-powered, and self-adaption abilities by integrating the technological advances in smart materials, smart sensors, structural health monitoring, structural control, and artificial intelligence. Smart structure technology may considerably enhance the functionality, reliability, safety and longevity of civil and mechanical structures. This section reviews the recent advances in the fields of structural health monitoring, structural control and energy harvesting.
Structural health monitoring
A sensory system is an important factor in structural health monitoring (SHM). For example, the Tsing Ma Bridge monitoring system established in Hong Kong in 1997 has around 280 sensors, including anemometers, temperature sensors, strain gauges, accelerometers, global positioning systems (GPS), displacement transducers, and level sensors [39]. An improved SHM system installed in the Stonecutters Bridge monitoring system (Hong Kong) later contains a total of over 1500 sensors [40], as shown in Figure 3. Various types of sensors produce a wide range of information for the implementation of an effective SHM and facilitate bridge safety/reliability assessment. Figure 3. Layout of the sensory systems in Stonecutters Bridge [40].
Anemometers Anemometers (24)
With the rapid development of sensing technology, the possibilities for the application of improved SHM techniques are becoming increasingly feasible [41,42]. In a monitoring system for civil structures, sensors are primarily used to monitor three types of parameters: loading sources such as wind, seismic, and traffic loading; environmental effects including temperature, humidity, rain, and corrosion; and structural responses such as strain, displacement, inclination, and acceleration.
Since the pioneer work done by [43], fiber Bragg grating (FBG) sensors have been gain popularity in structural health monitoring because of the small size, light weight, non-conductivity, fast response, resistance to corrosion, higher temperature capability, immunity to electromagnetic noise and radio frequency interference, multiplexing and wavelength-encoded measure and information. An interrogation unit is required to address the large array of FBG sensors by using a single source. Various interrogation techniques for FBG sensors were reviewed in [44,45] and introduced four standard interrogation techniques: time-division multiplexing (TDM), spatial-division multiplexing (SDM), frequency-division multiplexing (FDM), and wavelength-division multiplexing (WDM). These interrogation techniques can be used alone or in combination with the other techniques. Given that FBG sensors are very fragile in nature, sustainable encapsulation is required before such sensors are placed into a regular monitoring service. Another attractive feature of FBG sensors is that they can serve as both the sensing element and the signal transmission medium. A great number of successful application examples in civil structures have been conducted (e.g. [44,46],).
Traditional displacement transducers include linear variable differential transformer, laser transducers, and level sensing stations, which can only be used for relative displacement measurement. Total stations provide absolute displacement measurement but are unsuitable for long-term monitoring. An emerging solution is global positioning systems (GPS). Although GPS was originally designed for navigation, the global coverage and the continuous operation in all metrological conditions make it an efficient tool for measuring both the static and dynamic displacement responses of structures. GPS is currently able to record the displacements at rates of up to 20 Hz with an accuracy of 1 cm horizontally and 2 cm vertically. Its measurement accuracy will be improved in the future with the further advancement. However, it also possesses disadvantages such as partial limitation by multipath, cycle slips, high cost, and the requirement for good satellite coverage [47].
Structural control
Structural control refers to the technology that protect structures against excessive vibrations induced by dynamic loads (e.g., construction, traffic, wind, and earthquakes) and thus prevent the damage to structural and non-structural components. In the last several decades, considerable attention has been given to a variety of structural control techniques operating in passive, active, semi-active or hybrid modes [48,49].
Passive structural control commonly adopts energy dissipation strategy through various damping devices, such as friction dampers, metallic-yield dampers, bucklingrestrained braces, viscous fluid dampers, visco-elastic dampers, tuned-mass dampers, shape memory alloy dampers, eddy-current dampers, and so on. Another strategy is to reduce the seismic input energy using base isolation systems. Base isolation or seismic isolation works by shifting a short fundamental period that is located in the dynamic excitation frequency range to a long fundamental period. Base isolation is usually used in low-to medium-rise buildings and nuclear power plants for seismic resistant design. However, base isolation systems are ineffective for wind-induced vibration mitigation [50]. The passive structural control systems do not require any external energy supply.
Active control shows an excellent control performance in comparison with relatively simple passive systems [48]. An active control system consists of a sensing system, control actuators, and a centralized controller/computer. Feed-forward or/and feedback control can be utilized in active structural control; however, feedback control is often preferable considering the difficulty in excitation measurement. Active control is often implemented by external-powered hydraulic or electromechanical actuators that apply control forces to host structures in a prescribed manner. A large power source is thus required for ensuring the active control system for large-scale structures. The practically available power source and limited peak control force required by active control systems may constrain its control performance. The first application of active control to a full-scale building was conducted in the Kyobashi Center, Tokyo, Japan, which was designed by Kajima Corporation in 1989 [51]. Two hydraulic active mass drivers (4.2 and 1.2 tons; approximately 1% of the structural mass; one for lateral motion suppression; the other for torsional motion suppression) were installed on the top floor of the 11-story structure. Although some other application cases of active control systems have been implemented(mainly in Japan), the cost effectiveness and reliability of the systems limit its wide spread acceptance in civil structures [52].
Semi-active control systems (Figure 4), which require relatively little external power and provide high reliability, are proposed to address some limitations of the active control systems. The semi-active control systems can be categorized into variable damping and variable stiffness devices. The nature of a semi-active device is adaptively adjusted to be optimal in real time based on the responses of structures or/and excitation. Variable damping systems, including variable-orifice fluid dampers, controllable friction devices, controllable fluid dampers, smart TMDs, and semi-active magneto-rheological (MR) fluid or elastomer dampers, are popular in recent years for structural vibration mitigation.
Structure
Response Excitation
Feed-forward Loop
Feedback Loop PED Figure 4. Schematic of the operation of a semi-active control system [49].
Variable stiffness devices or semi-active stiffness control devices work by tuning the stiffness of structural elements, thereby avoiding the resonant-type motion under dynamic excitations and reducing the input energy. Semi-active control systems do not add any mechanical energy to the structure, and the bounded-input and bounded-output stability of the system can be guaranteed [48]. Thus, they have received increasing interest because of their potential for a robust, reliable, and low-power structural control. The comparison among the three categories of vibration control technologies reveals that a better control performance is often associated with higher complexity and low reliability. It will be appealing in practical applications if the reliability of passive control and the performance of active control can be achieved simultaneously. In the past studies on active control, it has been noted that the linear quadratic regulator (LQR) algorithm, which is a commonly adopted optimal control theory for active dampers, may produce a damper force-deformation relationship with an apparent negative stiffness feature that benefits control performance [53]. Thus, passive negative stiffness damper (NSDs), whose force-deformation relationship is shown in Figure 5(a), may be able to achieve control performance comparable to those of active dampers. Very recently, a family of NSDs have been developed, including passive negative stiffness springs based on the snap-through behavior of a pre-buckled beam, a passive negative stiffness mechanism composed of pre-compressed springs, a friction pendulum sliding isolator with a convex friction interface, and a magnetic NSD (MNSD) with several coaxially arranged magnets [54], as shown in Figure 6. In addition to NSD, inerter dampers are also recognized as another efficient vibration isolation technology. The force produced by inerter is proportional to the relative acceleration between the device terminals [55]. Figure 5(b) shows the typical force-deformation relationship of an inerter damper, which is similar to negative stiffness characteristics but is frequency-dependent. The inerter can be made mechanically through rack-and-pinion [55] or ball-screw, as shown by Figure 7 Some researchers also developed some tuned inerter dampers by emulating the principle of TMD, including tuned viscous mass dampers [57,58], tuned mass-damper-inerter systems (TMDI) [59] and tuned inerter dampers (TID) [60]. The main advantage of inerter is that the inerter can be designed to have an apparent mass significantly larger than its actual mass. This advantage of inerter offers the potential for much higher mass ratios than those feasible for TMDs [60].
Energy harvesting
Energy harvesting is recognized as an emerging and promising technology in the next few decades [61]. Solar, wind, radio-frequency(RF) waves, and structural vibrations can provide green, sustainable, reliable and localized energy sources to low-power devices or systems, such as wireless sensors networks, semi-active controllers, alarm systems, etc. For example, Spencer et al. [62] proposed solar and wind energy harvesting as the power supplies for 113 wireless sensors in the smart monitoring system for Jindo Bridge in South Korea. The energy-harvesting performance is monitored by the wireless sensors themselves, which enables sensing nodes to manage the sensing scheme automatically with respect to battery voltage status. Hassan et al. [63] proposed an energy-harvesting wireless crack monitoring sensor powered by a solar energy harvester. A comprehensive taxonomy of different energy harvesting sources for wireless sensor power supply has been presented in [64]. Meanwhile, vibration-based energy harvesting is one of the most rapidly growing research area [65][66][67].A typical configuration is a standard linear or nonlinear oscillator, in which part of the damping energy is converted into electrical energy by appropriate transduction mechanisms, including, but not limited to, piezoelectric, electromagnetic and electrostatic transductions [68], as shown in Figure 8. Piezoelectric transducers transform mechanical strain into electrical charge, known as direct piezoelectric effect [69]. Electromagnetic transducers generate voltage due to a relative motion between magnets and coils [70]. Electrostatic transducers are to utilize the variation in capacitance that can cause voltage increment in a constrained charge system or charge increment in a constrained voltage system [71]. The corresponding damping characteristics of these three transducers are different as well. More detail information about conversion mechanism are given in [72], and related derivation for output powers and efficiencies are given by [73].
Ambient vibrations, such as vibrations of mechanical and civil structures induced by various dynamic loads, provide energy sources for vibration-based energy harvesting. A special assessment of energy harvesting potential from a variety of vibration sources has been presented in [74]. Among them, civil structural vibration shows a relatively high feasibility because dynamic loadings, such as wind, earthquake, waves, and traffics, always result in relatively high structural vibration, especially for large-scale flexible structures. For example, a case study [75] shows that a power of more than 85kW could be harvested in buildings using an appropriate method. Recently, Tang and Zuo [76] utilized a regenerative TMD to harvest vibration energy from a threestorey building prototype, about 60mW energy was harvested when a proper harmonic force was applied to the prototype building. Zhu et al. [77] developed a dualfunction EM device for simultaneous vibration control and energy harvesting. Later on, a self-powered vibration control and monitoring system (Figure 9) was developed based on energy-harvesting dampers and wireless sensors [78]. The effectiveness of energy-harvesting dampers was further illustrated in high-rise buildings during earthquake and stay cable vibration mitigation under wind loads [79,80]. In addition, Peigney and Siegert [81] employed a cantilever piezoelectric harvester to harvest traffic-induced vibration energy in a bridge. A relatively low mean power of around 0.03mW could be harvested and a controlled voltage ranging from 1.8V-3.6V was observed. Garuso et al. [82] successfully harvested as high as about 600W power from wind-induced bridge vibration through an adaptive tuned-mass energy harvester under a relatively high wind speed.
National science foundation (NSF) programs and projects
The NSF [www.nsf.gov] sensors program funds fundamental research on sensors and sensing systems. Such fundamental research includes the discovery and characterization of new sensing modalities, fundamental theories for aggregation and analysis of sensed data, new approaches for data transmission, and for addressing uncertain and/or partial sensor data. Other related programs are: Biosensing, Biophotonics or Biomedical Engineering program, and also for areas of biosensing, sensors and actuators in the Electronics, Photonics, and Magnetic Devices program or the Communications, Circuits, and Sensing-systems program.
Examples of NSF active projects on sensors and smart materials: (
Conclusions
An overview on converging technologies that are the primary drivers of the 4th Industrial Revolution is presented, followed by new developments and state of the art in advanced manufacturing, smart structures, nano-, information-technologies and sciences. Convergence of these transformative technologies is discussed, including advanced manufacturing, nano mechanics/materials, sensors, smart structures/materials, energy harvesting, multi-scale problems and simulation methods. The authors would like to thank the research communities for their input and feedback.
Disclosure statement
No potential conflict of interest was reported by the authors. | 4,043.2 | 2018-10-02T00:00:00.000 | [
"Materials Science"
] |
ECHO-ENABLED TUNABLE TERAHERTZ RADIATION GENERATION WITH A LASER-MODULATED RELATIVISTIC ELECTRON BEAM ∗
A new scheme to generate narrow-band tunable Terahertz (THz) radiation using a variant of the echo-enabled harmonic generation is analyzed. We show that by using an energy chirped beam, THz density modulation in the beam phase space can be produced with two lasers having the same wavelength. This removes the need for an optical parametric amplifier system to provide a wavelengthtunable laser to vary the central frequency of the THz radiation. The practical feasibility and applications of this scheme is demonstrated numerically with a start-to-end simulation using the beam parameters at Shanghai Deep Ultraviolet Free-Electron Laser facility (SDUV). The central frequency of the density modulation can be continuously tuned by either varying the chirp of the beam or the momentum compactions of the chicanes. The influence of nonlinear RF chirp and longitudinal space charge effect have also been studied in our article. We also briefly discuss how one may retrieve the beam longitudinal phase space through measurement of the THz density modulation.
INTRODUCTION
As been widely used in radar, security, communication, medical imaging etc, THz radiation has drawn a lot of attention all over the world.A lot of studies have been done on the generation of THz radiation.Accelerator-based free electron laser (FEL) technology, such as the 4 th generation light source, can produce radiation with high intensity, high peak power and coherence, which might be a great candidate for the generation of THz radiation.In recent years, a new scheme of employing the echo effect previously observed in hadron accelerators was proposed [1] and experimentally tested [2] to generate high harmonics of FEL radiation, and the bunching coefficients of this scheme was also analyzed in detail [3] to give us clearer picture of the mechanism behind it.Providing the advantages of the echo-enabled harmonic generation FEL such as compact size, tunable frequency, etc., a method of using two lasers to modulate the electron beam to generate density modulation at THz frequency has been proposed and analyzed in †<EMAIL_ADDRESS>Ref. [4,5].In this scheme, the relativistic electron beam is firstly sent into a modulator to interact with a laser with the wave number k 1 .Then the beam is transmitted into the following modulator to interact with another laser with the wave number k 2 .The energy modulation will be converted into density modulation when the beam passes through a dispersion section.In this situation, the density modulation at wave number k = nk 1 + mk 2 can be achieved, where n and m are non-zero integers.By varying the wavelength of those two lasers and the parameter of dispersion, one can vary the central frequency of the beam density modulation conveniently.In this scheme, an optical parametric amplifier (OPA) is needed to provide two different lasers (e.g.800 nm and 1560 nm in this scheme) to modulate the electron beam respectively.As briefly mentioned in [5], one may generate the continuously tunable density modulation with two lasers having the same wavelength, which is different from the scheme mentioned above, if there is a chicane between the two modulators, similar to the echo-enabled harmonic generation scheme.A chirped electron beam is modulated in the modulator at wave number k 0 .The wave number of the beam density modulation will be compressed to k 1 = C 1 k 0 and its high harmonics when the beam passes through the first dispersion section with small R 56 .Then the beam will be sent into another same modulator to interact with a laser at the same wave number k 0 .After the beam passing through the second dispersion section, the energy modulation is converted into density modulation at the final wave number k = nC 1 C 2 k 0 + mC 2 k 0 , where n and m are nonzero integers.It is easy to find out that one may vary the final frequency of density modulation continuously by varying the chirp of the beam or R 56 of the first dispersion section.
GENERATION OF DENSITY MODULATION IN THEORY
As introduced in Sec.I (more details, see Ref. [4]), the layout of the scheme to generate THz density modulation in a relativistic electron beam with two different lasers at wave numbers k 1 and k 2 is shown in Figure 1(a1), which consist of two modulators and a dispersion section.The beam is modulated by two different lasers at wave numbers k 1 and k 2 when the beam passes through two modulators respectively.After the beam passes through the dispersion Consider an initial Gaussian energy distribution as where N 0 is the number of electrons per unit length, p = (E − E 0 )/σ E represents the dimensionless energy deviation with central energy E 0 and slice energy spread E .
After the beam passes through the dispersion section, the distribution function becomes whereA 1 = ∆E/σ E , A 2 = ∆E/σ E represent the energy modulation amplitude of each modulator and The bunching factor b n,m at each harmonic, defined as As a comparison, we show the new scheme to generate THz density modulation, which is briefly mentioned in [5], in Figure 1(a2).It is easy to find out that there is a dispersion section between two modulators, which is similar to the Echo-Enabled Harmonic Generation (EEHG) scheme.Based on Eq. 1, the density modulation in the beam is achieved at the exit of the second dispersion section at the wave number where C 1 and C 2 are the bunch compression factor provided by the first and the second dispersion section.The advantage of this new scheme over the old one is that we can use one laser with the beam split into two identical pulses to modulate the electron beam twice in two modulators respectively when there is energy chirp in the beam.
As the result, the complexity of the whole apparatus is reduced.In the following, we will discuss the detail of the new scheme.
Consider an initial Gaussian energy distribution function with a linear energy chirp h as where H = h k0σ E /E0 is the dimensionless energy chirp and ξ = k 0 z, and h = dγ γdz is defined as the energy chirp along the beam.
The derivation in Appendix A of Ref. [6] shows the bunching factor at the harmonic factor a after the beam, without an energy chirp, passing through two modulators and two dispersions as again, K = k 1 /k 2 and k 1,2 are the seed laser wave numbers of each modulator, By changing the integration variable from p to p = p − Hξ, and for the two identical wavelengths, define k 0 = k 1 = k 2 , therefore in Eq. 7, K = 1, we conclude that the angular bracket denoting averaging over ξ does not vanish only if and the bunching factor becomes where J n,m is the Bessel function of the first kind.According to Eq. 5, when it is straightforward to see that with n = 1, m = −1, and , if an 1047 nm laser is employed to modulate the beam, an about 3 THz density modulation can be generated and its THz radiation spectrum can be easily measured by the interferometer.
GENERATION OF DENSITY MODULATION IN A START-TO-END SIMULATION AND THZ RADIATION
The beam line of SDUV FEL facility, which is used to carry out the Echo-Enabled Harmonic Generation (EEHG) We use ASTRA to generate a Gaussian beam with the charge of 200 pC and the peak current of about 50 A from the photocathode RF gun and is accelerated to about 30 MeV with an S-band linac structure, considering the strong space charge effect in the injector.Simulation with ELEGANT code begins when the beam passes through L1 (consist of 2 S-band) and finishes after the beam passing through the second dispersion section (BC2, seen in Figure 2).After being accelerated to 160 MeV, the beam with energy chirp h = 11.22 m −1 is sent into the modulator to interact with a 1047 nm laser.To gain the maximal bunching according to the theory, it is optimal to regulate the laser power at ∆E 1 = 25 keV while the slice energy spread of the beam is about 5 keV.So is the same with the second modulator at∆E 2 = 50 keV.The main parameters in the simulation are listed in Table 1.
In the following simulation, we make a comparison among the laser off, A 1 = 5, A 2 = 5 and A 1 = 5, A 2 = 10.The beam currents and their Fourier Transform images are shown in Figure 3, from which we could see that the scheme could provide fundamental frequency of THz when A 2 = 5 and higher harmonic when A 2 = 10 to satisfy the needs of various users.In Figure 3, we could also see that the wavelength of density modulation in the left side is longer than that in the right side.The reason is that there is not a harmonic linearizer (e.g. an X-band cavity in FEL linac) in SDUV facility so that a nonlinear curvature exists in the longitudinal phase space, and as a result the electrons in the head are compressed differently to that in the tail when the beam passes through the second dispersion.Transition radiation (TR), undulator radiation, diffraction radiation and synchrotron radiation are all good methods to generate THz radiation.Considering the THz density modulation in the electron beam, transition radiation and undulator radiation are the better choices for us.In SDUV, the electron beam energy is about 160 MeV and the undulator is dedicated to generate sub-micron FEL radiation, therefore it is hard to match the resonance condition because the period and the magnet field of the undulator are both too large.
Therefore, TR is the only but even better choice because it is relatively easy and convenient to operate in our scheme.For the beam with energy 160 MeV, charge 200 pC and bunch length 2 ps and the round TR target with radius 5 mm, we calculated that the peak radiation intensity was about 0.12 µJ/THz and peak power at 4.5 THz was about 0.06 MW.The low charge of the beam leads to a low peak power of the radiation.The shape of the radiation spectrum is identical to the FFT curve shown in
1 )Figure 1 :
Figure 1: (color) layout of two schemes to generate THz density modulation in a relativistic electron beam.
Table 1 :
Main parameters in the ELEGANT simulation.Parameter Value bunch charge (nC) 0.2 beam energy (MeV) 160 transverse beam size (rms, mm) 0.2 slice energy spread (keV) 5 laser wavelength in U1 & U2 (nm) 1047 laser power in U1 (MW) 4.5 laser power in U2 (MW) 20 laser waist size (mm) 1.7 period of U1 & U2 (cm) 5 number of periods of U1 & U2 10 energy modulation amplitude in U1 (keV) 25 energy modulation amplitude in U2 (keV) 50 R 56 in U1 (mm) 0.477 R 56 in U2 (mm) 40 experiment [2], is shown in Figure 2. It consists of the injector, linear accelerator, two modulators, two dispersions and a radiator.The beam at peak current 50 A is accelerated to 160 MeV with 5 S-band (2856 MHz RF frequency) linac structures.The bunch compressor chicane in the linac is turned off in this scheme.The wavelength of seed laser in those two modulators is 1047 nm.
Figure 2 :
Figure 2: (Color) Layout of SDUV FEL facility for generation of THz density modulation in a chirped electron beam. | 2,683.8 | 2014-09-02T00:00:00.000 | [
"Physics",
"Engineering"
] |
Hypocycloid-shaped hollow-core photonic crystal fiber Part I : Arc curvature effect on confinement loss
We report on numerical and experimental studies showing the influence of arc curvature on the confinement loss in hypocycloid-core Kagome hollow-core photonic crystal fiber. The results prove that with such a design the optical performances are strongly driven by the contour negative curvature of the core-cladding interface. They show that the increase in arc curvature results in a strong decrease in both the confinement loss and the optical power overlap between the core mode and the silica core-surround, including a modal content approaching true singlemode guidance. Fibers with enhanced negative curvature were then fabricated with a record loss-level of 17 dB/km at 1064 nm. ©2013 Optical Society of America OCIS codes: (060.5295) Photonic crystal fibers; (060.2280) Fiber design and fabrication. References and links 1. P. Russell, “Photonic Crystal Fibers,” Science 299(5605), 358–362 (2003). 2. R. F. Cregan, B. J. Mangan, J. C. Knight, T. A. Birks, P. S. J. Russell, P. J. Roberts, and D. C. Allan, “SingleMode Photonic Band Gap Guidance of Light in Air,” Science 285(5433), 1537–1539 (1999). 3. F. Benabid and J. P. Roberts, “Linear and nonlinear optical properties of hollow core photonic crystal fiber,” J. Mod. Opt. 58(2), 87–124 (2011). 4. P. J. Roberts, F. Couny, H. Sabert, B. J. Mangan, D. P. Williams, L. Farr, M. W. Mason, A. Tomlinson, T. A. Birks, J. C. Knight, and P. St. J. Russell, “Ultimate low loss of hollow-core photonic crystal fibres,” Opt. Express 13(1), 236–244 (2005). 5. F. Couny, F. Benabid, P. J. Roberts, P. S. Light, and M. G. Raymer, “Generation and Photonic Guidance of Multi-Octave Optical-Frequency Combs,” Science 318(5853), 1118–1121 (2007). 6. F. Benabid, J. C. Knight, G. Antonopoulos, and P. S. J. Russell, “Stimulated Raman Scattering in HydrogenFilled Hollow-Core Photonic Crystal Fiber,” Science 298(5592), 399–402 (2002). 7. A. Argyros, S. G. Leon-Saval, J. Pla, and A. Docherty, “Antiresonant reflection and inhibited coupling in hollow-core square lattice optical fibres,” Opt. Express 16(8), 5642–5648 (2008). 8. F. Couny, P. J. Roberts, T. A. Birks, and F. Benabid, “Square-lattice large-pitch hollow-core photonic crystal fiber,” Opt. Express 16(25), 20626–20636 (2008). 9. T. Grujic, B. T. Kuhlmey, A. Argyros, S. Coen, and C. M. de Sterke, “Solid-core fiber with ultra-wide bandwidth transmission window due to inhibited coupling,” Opt. Express 18(25), 25556–25566 (2010). 10. A. Argyros and J. Pla, “Hollow-core polymer fibres with a kagome lattice: potential for transmission in the infrared,” Opt. Express 15(12), 7713–7719 (2007). 11. Y. Y. Wang, F. Couny, P. J. Roberts, and F. Benabid, “Low Loss Broadband Transmission In Optimized Coreshape Kagome Hollow-core PCF,” in Conference on Lasers and Electro-Optics (Optical Society of America, 2010), CPDB4. 12. Y. Y. Wang, N. V. Wheeler, F. Couny, P. J. Roberts, and F. Benabid, “Low loss broadband transmission in hypocycloid-core Kagome hollow-core photonic crystal fiber,” Opt. Lett. 36(5), 669–671 (2011). 13. T. D. Bradley, Y. Wang, M. Alharbi, B. Debord, C. Fourcade-Dutin, B. Beaudou, F. Gerome, and F. Benabid, “Optical Properties of Low Loss (70dB/km) Hypocycloid-Core Kagome Hollow Core Photonic Crystal Fiber for Rb and Cs Based Optical Applications,” J. Lightwave Technol. 31(16), 3052–3055 (2013). 14. A. D. Pryamikov, A. S. Biriukov, A. F. Kosolapov, V. G. Plotnichenko, S. L. Semjonov, and E. M. Dianov, “Demonstration of a waveguide regime for a silica hollow--core microstructured optical fiber with a negative curvature of the core boundary in the spectral region > 3.5 μm,” Opt. Express 19(2), 1441–1448 (2011). #197514 $15.00 USD Received 12 Sep 2013; revised 2 Nov 2013; accepted 5 Nov 2013; published 13 Nov 2013 (C) 2013 OSA 18 November 2013 | Vol. 21, No. 23 | DOI:10.1364/OE.21.028597 | OPTICS EXPRESS 28597 15. F. Yu, W. J. Wadsworth, and J. C. Knight, “Low loss silica hollow core fibers for 3-4 μm spectral region,” Opt. Express 20(10), 11153–11158 (2012). 16. Y. Y. Wang, X. Peng, M. Alharbi, C. F. Dutin, T. D. Bradley, F. Gérôme, M. Mielke, T. Booth, and F. Benabid, “Design and fabrication of hollow-core photonic crystal fibers for high-power ultrashort pulse transportation and pulse compression,” Opt. Lett. 37(15), 3111–3113 (2012). 17. A. V. V. Nampoothiri, A. M. Jones, C. Fourcade-Dutin, C. Mao, N. Dadashzadeh, B. Baumgart, Y. Y. Wang, M. Alharbi, T. Bradley, N. Campbell, F. Benabid, B. R. Washburn, K. L. Corwin, and W. Rudolph, “Hollow-core Optical Fiber Gas Lasers (HOFGLAS): a review [Invited],” Opt. Mater. Express 2(7), 948–961 (2012). 18. B. Beaudou, F. Gerôme, Y. Y. Wang, M. Alharbi, T. D. Bradley, G. Humbert, J. L. Auguste, J. M. Blondy, and F. Benabid, “Millijoule laser pulse delivery for spark ignition through kagome hollow-core fiber,” Opt. Lett. 37(9), 1430–1432 (2012). 19. B. Debord, M. Alharbi, T. Bradley, C. Fourcade-Dutin, Y. Wang, L. Vincetti, F. Gérôme, and F. Benabid, “Cups curvature effect on confinement loss in hypocycloid-core Kagome HC-PCF,” in CLEO: 2013 (Optical Society of America, 2013), CTu2K.4. 20. W. Belardi and J. C. Knight, “Effect of core boundary curvature on the confinement losses of hollow antiresonant fibers,” Opt. Express 21(19), 21912–21917 (2013). 21. S. Selleri, L. Vincetti, A. Cucinotta, and M. Zoboli, “Complex FEM modal solver of optical waveguides with PML boundary conditions,” Opt. Quantum Electron. 33(4/5), 359–371 (2001). 22. L. Vincetti, “Numerical analysis of plastic hollow core microstructured fiber for Terahertz applications,” Opt. Fiber Technol. 15(4), 398–401 (2009). 23. L. Vincetti and V. Setti, “Confinement Loss in Kagome and Tube Lattice Fibers: Comparison and Analysis,” J. Lightwave Technol. 30(10), 1470–1474 (2012). 24. L. Vincetti and V. Setti, “Extra loss due to Fano resonances in inhibited coupling fibers based on a lattice of tubes,” Opt. Express 20(13), 14350–14361 (2012). 25. E. A. J. Marcatili and R. A. Schmeltzer, “Hollow Metallic and Dielectric Wave-guides for Long Distance Optical Transmission and Lasers,” Bell Syst. Tech. J. 43(4), 1783–1809 (1964). 26. L. Vincetti and V. Setti, “Waveguiding mechanism in tube lattice fibers,” Opt. Express 18(22), 23133–23146 (2010). 27. NKTphotonics, http://www.nktphotonics.com/.
Introduction
Hollow-core photonic crystal fiber (HC-PCF) consists of an optical-guiding central air-core surrounded by an arrangement of micro-scaled silica tubes running along its length [1].Since its first experimental demonstration by Cregan et al. in 1999 [2] two main types of HC-PCF were established and their guidance mechanisms characterized [3].The first HC-PCF family guides via photonic bandgap (PBG), and holds the potential for guiding light with attenuation several orders of magnitude lower than the fundamental limit of ~0.16 dB/km in conventional optical fibers, and which is set by the Rayleigh scattering in silica.Hitherto, however, the lowest loss obtained with this type of fiber is 1.2 dB/km [4].This is set by the roughness of the air/glass interface and the presence of silica residing interface modes that couple with the core modes.This has not only limited the optical linear transmission performance but has also set several limits to the PBG guiding HC-PCF, such as poor optical power handling because of the subsequent strong overlap with the core silica-surround, and higher group velocity dispersion.Finally, another major limitation of this PBG guided HC-PCF is its limited transmission bandwidth (typically less than ~70 THz), which has hindered its use in applications where a large optical spectral band is required [3].
The second type of HC-PCF is distinguished by its broadband optical guidance and the relatively higher transmission loss-levels compared to the PBG guiding HC-PCF.This HC-PCF family guides via inhibited coupling (IC) between the cladding modes and the guided core modes.Unlike in PBG guiding HC-PCF, the cladding of the IC guiding HC-PCF doesn't exhibit any photonic bandgap but relies on a strong transverse mismatch between the continuum of cladding modes and those of the core, and the subsequent reduction of their field overlap integral.This guidance mechanism was proposed by Benabid and associates [5] to explain the salient features of Kagome-lattice HC-PCF, which was first reported in [6].Subsequently, this guidance mechanism has been demonstrated in PCF with other cladding structures [7][8][9], including solid-core fiber [9] and different materials [7,10].Despite the effort in optimizing the cladding structure to further reduce the coupling between the cladding modes and those of the core, such as thinning the silica web of the HC-PCF cladding and reducing the connecting nodes [5], the attenuation figures remained relatively high (>0.5 dB/m).
In 2010, we have proposed a new route in enhancing IC in Kagome HC-PCF and demonstrated a dramatic reduction of loss [11].This consists of core-shaping the HC-PCF to a hypocycloid-like contour (i.e. with negative curvature) [11,12] so as to minimize the spatial overlap between the high "azimuthal-like" number modes that reside in the silica coresurround and the zero-order Bessel-profiled core modes [13].The second advantage of a negative curvature core-contour is that the connecting nodes of the silica are located further away from the mode field radius of the core compared to the circular core-surround.These connecting nodes are inherent to the fiber fabrication process, and support modes with a low azimuthal number, which subsequently strongly couple to the core modes.This seminal work [11,12] has proved to enhance the coupling inhibition even with single ring cladding [14], and to exhibit loss figures as low as 30 dB/km in the IR domain at 1.55 µm [15][16][17].
These new developments call for a further understanding on the relevant physical parameters behind the confinement loss.This is of interest in the fundamental physics underlying this novel optical guidance mechanism, and to assess IC guiding HC-PCF as a potential long-haul optical fiber.Furthermore, the use of this type of fiber in high-field physics is proven to be an excellent vehicle for ultra-short pulsed high power laser pulse delivery and compression [16], and for nanosecond laser pulse delivery with energy levels higher than 10 mJ [18].
Here and in a follow-up paper we explore respectively the effect of arc negative curvature of the hypocycloid core along with that of the cladding on the confinement loss in this Kagome HC-PCF.This effect was firstly reported by the present authors in [19], and then followed by [20].In the present paper, we give a detailed account of what has been reported in [19] by reporting on experimental and numerical work showing the influence of the arc curvature of the hypocycloid-core on the loss figure of the fiber and on its modal properties.In section 2, we define the arc curvature and present the theoretical and experimental loss spectra for different arc curvatures.In section 3, we show the evolution with arc curvature of the optical power overlap between the core mode with the cladding silica struts, and of the core modal content.Finally, we present the fabrication of fiber exhibiting the largest arc curvature and a record loss level of 17 dB/km at 1064 nm.
Loss evolution with the arc curvature
Figure 1(a) shows a typical example of hypocycloid-like core HC-PCF with a Kagome lattice cladding.In the case considered here, the hypocycloid core contour results from 7 missing cells of a triangular arrangement of circular tubes.Consequently, the core contour is formed by two alternating arcs, which result in a small circle with radius R in that is tangent with the 6 most inward arcs, and a larger circle with radius R out which is tangent with the 6 most outward arcs.The curvature of the hypocycloid-like core is quantified through the parameter noted b (see Fig. 1(b)).The latter is defined as b = d/r, where d is the distance between the top of the arcs and the chord joining the nodes connecting the inward arc to its two neighboring ones and r is half the chord-length.With this definition, the "classical" Kagome fiber with a "quasi" circular core corresponds to b = 0, whilst b = 1 corresponds to a core contour with circular shaped arcs.For 0<b<1 and b>1, the inward arcs have an elliptical shape whilst the outward ones are set to have a circular shape.Figure 2(a) shows the calculated loss spectra of the HE 11 core mode for a hypocycloid-like core Kagome-lattice HC-PCF with different arc curvatures.The numerical results have been obtained through the modal solver of the commercial software Comsol Multiphysics 3.5 based on the finite-element method, with an optimized anisotropic perfectly matched layer (PML) [21], and which was already successfully applied to the analysis of loss and dispersion properties of several IC HC-PCFs [22,23].In the simulations, all the HC-PCFs have a 7-cell core defect and a Kagome-latticed cladding of 3 rings, and with strut thickness t equal to 350 nm.The HC-PCF structure has been studied for core arcs varying from b = 0 to b = 1.5.Furthermore, the core inner diameter (i.e. 2 × R in ) has been kept constant and equal to 60 µm throughout (see Fig. 2(b)).Consequently, this has an effect on the pitch Λ of the cladding which changes by Λ The calculated spectra clearly show the strong influence of the inward arc curvature on the loss confinement, and where the loss level drops from ~1000 dB/km in the case of a "quasi" circular core (i.e.b = 0) to lower than the 1 dB/km for hypocycloid core with b≥1.For a given structure, the loss spectrum exhibits a high loss spectral-region near 700 nm; corresponding to the resonance of the fundamental core-mode with the glass struts, and occurring at wavelengths j λ given by the expression where j is an integer, n gl is the refractive index of the glass forming the cladding structure and t is the thickness of the glass web which is assumed to be uniformly constant throughout the cladding structure [5].Outside these spectral regions (i.e. the spectral ranges where the silica cladding structure is antiresonant), the confinement loss exhibits an exponential-like decrease with the increase in the curvature parameter b. Figure 2(c) illustrates this trend for one wavelength in the first transmission band ( j λ = (1000 nm), and one wavelength at the second order transmission window ( j λ = 500 nm).It is noteworthy that all the confinement loss spectra exhibit relatively strong oscillations.These are attributed to resonant structural cladding features such as corners [5], and in some cases are Fano resonances [24].These oscillations deserve further study which is beyond the current scope.This trend of loss reduction with the increase in b has been experimentally confirmed with the fabrication of four Kagome-latticed fibers with different b. Figure 3(a) shows scanning electronic microscope (SEM) images of the fibers around the fiber core.Figure 3(b) shows the loss spectra in the fundamental band (i.e. for wavelengths longer than roughly twice the thickness of the silica struts [5]), obtained by a cutback measurement using a supercontinuum source, and fiber lengths were in the range of 50 to 70 m.The loss base-line level was found to be ~1300 dB/km for b = 0, 400 dB/km for b = 0.39, 200 dB/km for b = 0.68, and 40 dB/km for b = 0.75, which is in a qualitative agreement with the theoretical calculations.Care was taken in the fiber design and fabrication so as to have all the fibers with the same strut thickness of 350 nm, core diameter of 60 μm and the pitch of 21 μm within a measured relative uncertainty of less than 10%.Figure 3(c) shows, for a given wavelength of 1500 nm, a comparison between the calculated confinement loss and the measured transmission loss when b is increased.The higher level in the measured loss relative to the numerical results is likely due to the cladding imperfections such as non-uniform strut thickness.
Power overlap and modal content "cleansing" with increasing b
Figure 4 shows the evolution of the mode profile for the core fundamental mode (FM) HE 11 with the increase of b.As above, the inner radius of the fiber R in is still kept constant at 30 µm throughout all the simulations.We observe that the change of the curvature doesn't affect the mode profile (see Fig. 4(a)).More remarkably, the mode-field diameter MFD ((1/e) 2 diameter of the modal transverse profile) shows little change when b is altered from 0 to 1.5.This is illustrated by the radial profile of the mode along the two axes of the core-symmetry (i.e.along the axes of R in and R out respectively (see Fig. 1)), and where the MFD radii are shown in a vertical dashed line.
In order to gain further insight into the properties of the HE 11 in this hypocycloid-core Kagome HC-PCF, it is useful to compare its MFD to that of a dielectric circular capillary whose properties such as dispersion and guided field profile takes simple and analytical forms [25].For example in the case of a dielectric capillary with a bore radius R cap , and a dielectric index n g , the effective index and the MFD of the fundamental core-mode HE 11 are given by The results indicate that within a maximum relative error of less than 7%, one could approximate the MFD of hypocycloid-core HC-PCF to that of a capillary with an effective radius equal to that of the inner radius of the hypocycloid.Similarly, the dispersion of the HE 11 mode deviates little from that of the capillary when the curvature parameter b is increased from 0 to 1.5.Figure 4(c) shows the HE 11 mode effective index spectra for different b along with that of a capillary with a bore radius of 30 µm.The dispersion traces show that the shorter the wavelength the closer the traces are.Furthermore, outside the resonance with the glass region, one could qualitatively approximate the effective index of hypocycloid-core Kagome HC-PCF fundamental mode to that of a dielectric capillary of bore radius equal to the inner radius of the hypocycloid-core.
A direct consequence of the above-mentioned properties of the hypocycloid-core Kagome HC-PCF fundamental core-mode is that its spatial optical power overlap (SPOPO) with the silica core-surround is reduced when the core shape is changed from a circular contour to a hypocycloid one.This is expected from a simple assessment of the geometrical overlap between the zero-order Bessel shaped HE 11 mode and the core-contour at radius R in [9,12].A numerical corroboration is shown in Fig. 5, which gives, for a wavelength of 1 µm, the evolution with b of the fractional optical power residing in the cladding silica web for the core fundamental mode HE 11 and the first four higher order modes (HOM) (represented only by one curve for better visibility as the curves for these four HOM are so closed to each other), which consist of the two polarizations of the HE 21 mode, the TE 01 and TM 01 modes.The fractional power in silica η was deduced numerically using the following expression: where Z p is the longitudinal component of the Poynting vector, while si S and S ∞ indicate integration over the silica region and the whole cross section, respectively.The results show that the relative power ratio for the HE 11 decreases by a factor of ~10 when b is increased from 0 to 0.5, and then decreases at a lower rate when b is increased from 0.5 to 2. The HOM fractional power in silica follows the same trend for the b in the range of 0-1.5.However for b>1.5 the overlap with silica strongly increases.This is due to the coupling between the HOM core modes and the inner arc hole modes (HM), that is the modes confined inside the holes of the large arcs surrounding the core.Figure 6(a) shows spectra of the effective index difference between the HOMs and the HM for b = 1.0, b = 1.5 and b = 1.9.The curves clearly show that increasing b from 1 to 1.9 reduces the index difference between the HOM and the HM, thus favoring coupling between them by virtue of phase matching.Indeed, as b increases, the hole size of the large arc increases, and thus the HM effective index increases and approaches that of the fiber core.This is also illustrated in the inset Fig. 6 which shows the intensity profile evolution of one of the HOM core modes (here HE 21 mode) at 1 µm for b = 1.0, b = 1.5 and b = 1.9.For b = 1.9, the HE 21 intensity profile differs strongly from those in the case of b = 1.0 and b = 1.5, and show a strong hybridization with the HM.This has already been experimentally observed in [5] and further investigated in [13,26].As a result, since the HMs are much less confined than the HOM, the hybridization causes an increase in both the fraction power in silica and the confinement loss of HOM.The coupling between the HOM and HM is also indicative of the strong coupling inhibition between the core modes and the silica strut modes [5].The decrease of the SPOPO between the fundamental core modes and the silica core-surround in the case of the hypocycloid-shaped core HC-PCF corroborates the qualitative picture reported in [11][12][13].This also explains the high power handling demonstrated recently in [16].It is noteworthy, that the decrease in the spatial overlap between the core guided modes and silica core-surround is not the only mechanism behind the confinement loss decrease by increasing b.Indeed, this can be observed in the decrease rate difference between the strong confinement loss (Fig. 2(c)), and that of the SPOPO when b is increased above 0.5 (Fig. 5).Furthermore, in addition to the SPOPO decrease between the core modes and those of the core silica-surround, the increase of b also results in a reduction of the overlap integral between the fiber core-modes and those of the silica core-surround via symmetry-mismatch (i.e.transverse phase-mismatch).Figure 7 summarizes the evolution around 1 µm wavelength of a representative silica core-surround mode when b is increased from 0 to 1.9 whilst keeping R in to 30 µm. Figure 7(a) shows the intensity profile of a mode with closest effective index to that of HE 11 mode for different b.For each b, the profile shows the expected silicaresiding mode with a fast-oscillating transverse-phase [5].The latter is quantified by an azimuthal-like number m which corresponds to the number of phase oscillations along the silica core-contour.Figure 7(b), which shows the evolution of m with b increase, clearly indicates that as b is increased, m is increased exponentially, and subsequently the overlap integral between the core mode and the silica core-surround is strongly decreased.The increase in m with b results from the increase in the perimeter of the hypocycloid contour as b is increased, as is illustrated by the red trace in Fig. 7(b).The net result of the azimuthal-like number increase is an enhanced coupling inhibition between the fundamental core mode and the cladding modes, and subsequently a decrease in the experimentally observed confinement loss.Finally, the increase of b carries a third merit as it involves the suppression of the higher order modes via propagation loss enhancement.Figure 8(a) shows the spectra loss for the four HOMs versus b at 1 µm wavelength (represented only by one curve for better visibility as the curves for these four HOM are so closed to each other).Unlike the HE 11 loss, which keeps decreasing with b, the HOM confinement loss shows an increase for b larger than 0.5, which is due to the coupling with the HM as mentioned above.This explains the single modedness observed in [16].Furthermore, the confinement loss "extinction-ratio" between the HE 11 mode and the HOM increases from 0 dB for b<0.5 to 7 dB for b = 1, reaching the staggering figure of >100 dB for b = 1.9.These results clearly show that in order to have a near singlemode guidance, b should be much higher than 0.5.This is illustrated experimentally by inspecting the near-field and far field of the output of 1064 nm laser beam from 3m-long of two fabricated HC-PCFs with a b = 0.39 and b = 1 (see Fig. 8(b)).It is noteworthy that the HC-PCF with b = 1 was obtained with a silica strut thickness of 1400 nm instead of t = 350 nm, which is the thickness of all the fabricated fibers mentioned above.This was dictated by the difficulties in drawing thin strut HC-PCF with enhanced negative curvature.Indeed, in the above, the highest achieved b parameter without altering the desired structure was limited to 0.75.Furthermore, we experimentally observed that with such thin struts, the higher-order transmission bands exhibit stronger transmission loss (>1 dB/m) than what is numerically predicted [2].We believe that this is due to an enhanced capillary wave effect, and hence the fiber core-wall surface roughness during the fiber draw.Further investigations are required to assess the source of the higher loss in the short wavelength (< 700 nm in the case of our 350 nm thick strut HC-PCF), and the feasibility of enhanced negative curvature with the stack-and-draw process used here.
In order to achieve higher values than this experimental limit whilst keeping the same fabrication process, a new design of fiber has been explored based on thicker struts so as to mitigate the induced surface roughness mentioned above, and with the aim to optimize the loss for a wavelength around 1 µm.Numerical simulations show that ~1 dB/km loss-level could be obtained with thicker struts if b = 1 could be experimentally achieved (see Fig. 9(a)).
The figure shows calculated loss spectra for three different t values, t = 350 nm, 800 nm and 1400 nm.Here the inner core radius has been set to be the same as the simulations above for the case of thinner struts (i.e.R in = 30 µm).An attenuation level of ~1 dB/km at 1 µm was numerically predicted for the three fibers but in the fundamental band for t = 350 nm, in the second band for 800 nm and in the third for 1400 nm.It is noteworthy, that whilst thickening the struts would reduce the bandwidth of each transmission band, the fiber total bandwidth, unlike the PBG HC-PCF, remains unrestricted.Furthermore, this relative loss in bandwidth could be compensated by a more uniform strut thickness throughout the whole cladding structure, which narrows the high loss bands [5].
Figures 9(b) and 9(c) correspond to the loss spectra of two fabricated fibers with thicker struts with t = 800 nm and 1400 nm respectively.A maximum arc curvature of 0.9 is obtained for t = 800 nm.In contrast with thin-strut fibers, which exhibit strong attenuation in the higher-order transmission band, a loss figure as low as 80 dB/km was achieved in the first high-order band.This result was further corroborated with the t = 1400 nm fiber in which the wavelength of 1 µm lies in the second higher-order band.With this fiber, an arc curvature as high as b = 1 was successfully achieved, and a record loss fiber of 17 dB/km at 1064 nm was reached.In order to put these results into perspective with the PBG guiding HC-PCF, Fig. 10 compares the loss spectra of two hypocycloid Kagome HC-PCF with the same strut thickness (here 1400 nm) and same b (here 1) but with slightly different drawing parameters, with four state-of-art commercially available PBG guiding HC-PCF [27].Three of the fibers set are 7cell PBG guiding HC-PCF whose transmission window is centered at ~800 nm, 1000 nm and 1550 nm respectively (light gray filled curves).The fourth fiber is today's record 19-cell PBG guiding HC-PCF [4] (gray filled curve).The figure shows that whilst one single IC guiding Kagome HC-PCF guides a much wider spectral window than PBG HC-PCF, its loss figures are comparable.As a matter of fact, the loss is lower for Kagome HC-PCF in the region near 1 µm and 800 nm, indicating thus that IC guiding HC-PCF could be a good alternative to PBG HC-PCF for ultra-low loss guidance HC-PCF.
Conclusion
In conclusion, we reported a systematic numerical and experimental study on the effect of the hypocycloid-core Kagome HC-PCF negative curvature on the fiber transmission loss and its modal properties.The results showed that enhancing the core-surround curvature carries three merits: (i) several orders of magnitude decrease in the fundamental core-mode confinement loss, (ii) more than tenfold reduction in the fundamental core-mode power overlap with cladding silica, and (iii) nearing a truly single-mode guidance.The results show that the transmission loss of IC guiding HC-PCF can outperform the PBG guiding HC-PCF, as demonstrated with 17 dB/km transmission loss around 1 µm spectral range in b = 1 hypocycloid-core Kagome HC-PCF, whilst providing much larger bandwidth, a much higher optical power handling and a modal content approaching single-modedness.
Fig. 1 .
Fig. 1.(a) Structure of a hypocycloid-like core HC-PCF.(b) Definition of the parameters quantifying the curvature of the core arcs.
Fig. 2 .
Fig. 2. (a) Kagome-latticed HC-PCF computed confinement loss evolution with the arc curvatures (b = 0, 0.2, 0.5, 1 and 1.5).The dashed lines are added for eye-guidance.(b) The fiber structure transverse profile for the different b values.(c) Evolution with b of the transmission loss figures for 1000 nm (joined solid squares) and for 500 nm (joined open circles) wavelengths.
Fig. 3 .
Fig. 3. SEM images (a) and measured loss spectra (b) of fabricated hypocycloid-core Kagomelatticed HC-PCFs with different b.(c) Experimental and theoretical evolution of the transmission loss with the b at 1500 nm.
Fig. 4 .
Fig. 4. (a) Evolution with b of HE 11 mode profile at 1000 nm: Radial profile of the mode intensity along the two symmetry axes (lhs), and the 2D transverse intensity profile (rhs).(b) Evolution with b of the relative error in the MFD of hypocycloid-core Kagome HC-PCF when approximated to that of a capillary, at 1000 nm.(c) Effective index spectrum of a capillary with R cap = 30 µm (grey dashed curve), and for a 30 µm inner radius hypocycloid-core Kagome HC-PCF with different b (solid curves).
Figure 4 (
Figure 4(b) shows the relative error, ( ) / , cap HCPCF cap MFD MFD MFD − when the hypocycloid-core Kagome HC-PCF mode field-diameter, MFD HCPCF , is approximated to that of a capillary, MFD cap , with its radius R cap = R in .Here, the MFD HCPCF is deduced from the numerically calculated effective area A eff by. 2 / .HCPCF eff MFD
Fig. 5 .
Fig. 5.The fractional optical power residing in the cladding silica, for a wavelength of 1 µm, for the core fundamental mode HE 11 (black trace) and for the first four higher order modes: the two polarizations of the HE 21 mode, TE 01 , TM 01 (red curve).
Fig. 6 .
Fig. 6.Evolution with b of the effective index difference between the core first higher order modes (HOM), and the large arc mode (HMs) for b = 1.0, b = 1.5 and b = 1.9.In inset: intensity profile of the HE 21 mode at 1 µm for the different b values.
Fig. 7 .
Fig. 7. (a) Profile of a representative silica core-surround mode for b = 0 (top left), b = 0.5 (top right), b = 1 (bottom left) and b = 1.5 (bottom right) at 1 µm.In inset: zoom-in of the cladding profile on a small section of the first inner arc.(b) Evolution of the azimuthal-like number m and the perimeter of the silica core-surround contour with b.
Fig. 8 .
Fig. 8. (a) Calculated spectra loss at 1 µm for the fundamental core-mode HE 11 and for the first four higher order modes.(b) Measured fundamental core-mode near field and far field at 1064 nm for 2 fabricated fibers with b = 0.39 and b = 1.
Fig. 10 .
Fig. 10.Comparison of the loss spectra of the current lowest loss 19-cell BPG HC-PCF centered at 1550 nm (A), and three state-of-art 7-cell PBG HC-PCF centered at 800 nm (B), 1000 nm (C) and 1550 nm (D) with the two different hypocycloid-core Kagome HC-PCF with b = 1 (blue and green solid curves). | 7,315 | 2013-11-18T00:00:00.000 | [
"Physics"
] |